Update README.md

README.md:
```diff
@@ -34,10 +34,61 @@ configs:
     path: data/validation-*
   - split: test
     path: data/test-*
-license: cc-by-4.0
+license: cc-by-sa-4.0
 task_categories:
 - text2text-generation
 language:
 - en
 pretty_name: MinWikiSplit++
 ---
```

# MinWikiSplit++

This dataset is the Hugging Face version of MinWikiSplit++.
MinWikiSplit++ enhances the original [MinWikiSplit](https://aclanthology.org/W19-8615/) by applying two refinement techniques: filtering through NLI classification and sentence-order reversing, which remove noise and reduce hallucinations relative to the original dataset.
The preprocessed MinWikiSplit dataset on which this version is based can be found [here](https://huggingface.co/datasets/cl-nagoya/min-wikisplit).

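The sentence-order reversing technique can be sketched in a few lines: the simple sentences of a split are re-joined in reverse order. The sentence pair below is a made-up illustration, not an example taken from the dataset.

```python
# Illustrative sketch of sentence-order reversing: the simple sentences
# produced by a split are concatenated in reverse order.
# The sentence pair below is made up for demonstration.
simples = ["He was born in Paris.", "He later studied law."]
simple_reversed = " ".join(reversed(simples))
print(simple_reversed)  # He later studied law. He was born in Paris.
```

Reversing the order discourages a seq2seq model trained on the data from simply copying the beginning of the complex sentence, which is one way the dataset reduces hallucinations.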
## Usage

```python
import datasets as ds

dataset: ds.DatasetDict = ds.load_dataset("cl-nagoya/min-wikisplit-pp")
print(dataset)

# DatasetDict({
#     train: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
#         num_rows: 139241
#     })
#     validation: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
#         num_rows: 17424
#     })
#     test: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
#         num_rows: 17412
#     })
# })
```

### Data Fields

- `id`: The ID of the example (note that these IDs are not compatible with those of the existing MinWikiSplit).
- `complex`: A complex sentence.
- `simple_reversed`: The simple sentences, concatenated in reversed order.
- `simple_tokenized`: A list of the simple sentences split by [PySBD](https://github.com/nipunsadvilkar/pySBD), in the original (non-reversed) order.
- `simple_original`: The simple sentences in their original order.
- `entailment_prob`: The average probability that each simple sentence is classified as entailed by the complex sentence. [DeBERTa-xxl](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli) is used for the NLI classification.
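A minimal sketch of how `entailment_prob` is derived and might be used for filtering. The per-sentence probabilities and the threshold below are made-up illustrations, not values from the dataset or the paper:

```python
# Made-up per-sentence NLI entailment probabilities for one example's
# simple sentences, as an NLI classifier might produce them.
per_sentence_probs = [0.98, 0.91, 0.95]

# entailment_prob is the average over the simple sentences.
entailment_prob = sum(per_sentence_probs) / len(per_sentence_probs)

# A downstream user could drop low-scoring pairs; 0.9 is an arbitrary
# threshold chosen only for illustration.
keep = entailment_prob >= 0.9
print(round(entailment_prob, 4), keep)  # 0.9467 True
```

With the real dataset, the same thresholding can be done via `Dataset.filter` on the `entailment_prob` column.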

## Paper

Tsukagoshi et al., [WikiSplit++: Easy Data Refinement for Split and Rephrase](https://arxiv.org/abs/2404.09002), LREC-COLING 2024.

## License

MinWikiSplit is built upon the [WikiSplit](https://github.com/google-research-datasets/wiki-split) dataset, which is distributed under the CC BY-SA 4.0 license.
Therefore, this dataset follows suit and is distributed under the CC BY-SA 4.0 license.