Update README.md
README.md

The description of each key (when the instance contains two questions) is as follows:

- **tag:** Type of tag of the article (Inconsistent/Self-contradictory/Contradict-other).
- **tagDate:** Date of the tag.
- **tagReason:** Reason for the tag.
- **wikitag_label_valid:** Whether the tag is valid or invalid (Valid/Invalid).
- **valid_comment:** Comment on the tag.
- **paragraphA_article:** Title of the article containing passage 1.
- **paragraphA_information:** Relevant information of passage 1.
- **paragraphA_information_standalone:** Decontextualized relevant information of passage 1.
- **paragraphB_article:** Title of the article containing passage 2.
- **paragraphB_information:** Relevant information of passage 2.
- **paragraphB_information_standalone:** Decontextualized relevant information of passage 2.
- **wikitag_label_samepassage:** Whether passage 1 and passage 2 are the same passage (Same/Different).
- **relevantInfo_comment_A:** Comment on the information of passage 1.
- **relevantInfo_comment_B:** Comment on the information of passage 2.
- **Contradict type I:** Contradiction type I focuses on the fine-grained semantics of the contradiction, e.g., date/time, location, language, etc.
- **Contradict type II:** Contradiction type II focuses on the modality of the contradiction. It describes the modality of passage 1 and passage 2: whether the information comes from a piece of text, from a row of an infobox, or from a table.
- **Contradict type III:** Contradiction type III focuses on the source of the contradiction. It describes whether passage 1 and passage 2 are from the same article or not.
- **Contradict type IV:** Contradiction type IV focuses on the reasoning aspect. It describes whether the contradiction is explicit or implicit (Explicit/Implicit). An implicit contradiction requires some reasoning to understand why passage 1 and passage 2 contradict each other.
- **question1:** Question 1 inferred from the contradiction.
- **question1_answer1:** Gold answer to question 1 according to passage 1.
- **question1_answer2:** Gold answer to question 1 according to passage 2.
- **question2:** Question 2 inferred from the contradiction.
- **question2_answer1:** Gold answer to question 2 according to passage 1.
- **question2_answer2:** Gold answer to question 2 according to passage 2.
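
As a quick illustration of this schema, the sketch below loads the dataset and prints a few fields of one instance. It is a minimal sketch only: the dataset ID and split name are placeholders, not values confirmed by this card.

```python
from datasets import load_dataset

# Minimal sketch for inspecting one instance of the benchmark.
# ASSUMPTION: the dataset ID is a placeholder; substitute the actual ID of
# this dataset card. The "train" split name is likewise an assumption.
ds = load_dataset("...")  # dataset ID elided

unit = ds["train"][0]
# The two decontextualized passages that contradict each other:
print(unit["paragraphA_information_standalone"])
print(unit["paragraphB_information_standalone"])
# The inferred question and its two passage-dependent gold answers:
print(unit["question1"])
print(unit["question1_answer1"], "|", unit["question1_answer2"])
```
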
## Usage of the Dataset

We provide the following starter code. Please refer to the [GitHub repository](https://github.com/) for more information about the functions `load_testingdata` and `generateAnswers_bam_models`.

```python
# ... (beginning of the script omitted in this excerpt)
parameters = TextGenerationParameters(
    max_new_tokens=250,
    min_new_tokens=1,
    decoding_method=DecodingMethod.GREEDY,
    return_options=TextGenerationReturnOptions(
        # if ordered is False, you can use return_options to retrieve the corresponding prompt
        input_text=True,
    ),
)

# load dataset
testingUnits = load_testingdata()
# test LLM models
generateAnswers_bam_models(testingUnits)
```
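
For orientation, the snippet below sketches one way an evaluation call could be written on top of the parameters defined above, using the ibm-generative-ai (BAM) SDK that the starter code's classes come from. The prompt template and the helper `answer_with_context` are illustrative assumptions, not the implementation of `generateAnswers_bam_models` from the repository.

```python
from genai import Client, Credentials

# ASSUMPTION: BAM credentials are provided via environment variables.
client = Client(credentials=Credentials.from_env())

def answer_with_context(unit: dict, model_id: str) -> str:
    # Show the model both contradictory passages, then ask the inferred
    # question. This template is an illustration, not the repository's.
    prompt = (
        f"Passage 1: {unit['paragraphA_information_standalone']}\n"
        f"Passage 2: {unit['paragraphB_information_standalone']}\n"
        f"Question: {unit['question1']}\n"
        "Answer:"
    )
    responses = client.text.generation.create(
        model_id=model_id,  # the model you want to evaluate
        inputs=[prompt],
        parameters=parameters,  # the TextGenerationParameters defined above
    )
    return next(responses).results[0].generated_text

# Example (assuming testingUnits is a list of instances with the keys above):
# answer = answer_with_context(testingUnits[0], model_id="...")
```
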

If this dataset is utilized in your research, kindly cite the following paper:

**APA:**

Hou, Y., Pascale, A., Carnerero-Cano, J., Tchrakian, T., Marinescu, R., Daly, E., Padhi, I., & Sattigeri, P. (2024). WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia. arXiv preprint arXiv:2406.13805.

<!-- ## Glossary [optional] -->