We provide the train, dev, and test sets. For more details, find our report [here](https://github.com/rish-16/cs4248-project/blob/main/CS4248_Group19_Final_Report.pdf).

## Dataset details

MLe-SNLI contains 500K training (`train`) samples of premise-hypothesis pairs along with their associated label and explanation. We take 100K training samples from the original e-SNLI (Camburu et al., 2018) dataset and translate them into 4 other languages (Spanish, German, Dutch, and French). We do the same for all 9824 testing (`test`) and validation (`dev`) samples, giving us 49120 samples for both the `test` and `dev` splits.

| Column          | Description                                                                         |
|-----------------|-------------------------------------------------------------------------------------|
| `premise`       | Natural language premise sentence                                                   |
| `hypothesis`    | Natural language hypothesis sentence                                                |
| `label`         | One of `entailment`, `contradiction`, or `neutral`                                  |
| `explanation_1` | Natural language justification for `label`                                          |
| `language`      | One of English (`en`), Spanish (`es`), German (`de`), Dutch (`nl`), or French (`fr`) |

> **WARNING:** the translation quality of MLe-SNLI may be compromised for some natural language samples because of quality issues in the original e-SNLI dataset that were not addressed in our [work](https://github.com/rish-16/cs4248-project). Use it at your own discretion.
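The split sizes quoted above follow directly from the five languages: each split is the per-language count multiplied by five. A quick sanity check of that arithmetic:

```python
# Per-language counts described above: 100K train and 9824 dev/test
# samples, each translated into 5 languages (en, es, de, nl, fr).
LANGUAGES = ["en", "es", "de", "nl", "fr"]
per_lang = {"train": 100_000, "dev": 9_824, "test": 9_824}

# Total rows per split = per-language count x number of languages.
totals = {split: n * len(LANGUAGES) for split, n in per_lang.items()}

print(totals)  # {'train': 500000, 'dev': 49120, 'test': 49120}
```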
## Download Instructions
To access MLe-SNLI, you can use the HuggingFace Datasets API to load the dataset:

```python
from datasets import load_dataset

mle_snli = load_dataset("rish16/MLe-SNLI") # loads a DatasetDict object

train_data = mle_snli['train'] # 500K samples (100K per lang)
dev_data = mle_snli['dev']     # 49120 samples (9824 per lang)
test_data = mle_snli['test']   # 49120 samples (9824 per lang)

print(mle_snli)
"""
DatasetDict({
    train: Dataset({
        features: ['premise', 'hypothesis', 'label', 'explanation_1', 'language'],
        num_rows: 500000
    })
    test: Dataset({
        features: ['premise', 'hypothesis', 'label', 'explanation_1', 'language'],
        num_rows: 49120
    })
    validation: Dataset({
        features: ['premise', 'hypothesis', 'label', 'explanation_1', 'language'],
        num_rows: 49120
    })
})
"""
```
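Since every split mixes all five languages, a common first step is grouping or filtering rows by the `language` column (with the HF Datasets API this would be `dataset.filter(lambda ex: ex["language"] == lang)`). A minimal sketch of the same idea on plain dictionaries, using toy rows that only mirror MLe-SNLI's columns (the example sentences are illustrative, not taken from the dataset):

```python
from collections import defaultdict

# Toy rows mirroring MLe-SNLI's columns; in practice these would come
# from mle_snli['train'] after load_dataset("rish16/MLe-SNLI").
rows = [
    {"premise": "A man plays guitar.", "hypothesis": "A person makes music.",
     "label": "entailment", "explanation_1": "Playing guitar makes music.",
     "language": "en"},
    {"premise": "Un hombre toca la guitarra.", "hypothesis": "Una persona hace musica.",
     "label": "entailment", "explanation_1": "Tocar la guitarra hace musica.",
     "language": "es"},
]

# Bucket rows by the `language` column.
by_lang = defaultdict(list)
for row in rows:
    by_lang[row["language"]].append(row)

print(sorted(by_lang))  # ['en', 'es']
```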