---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- cs
pretty_name: Czech SNLI
---

# Dataset Card for Czech SNLI

Czech translation of the [Stanford Natural Language Inference](https://nlp.stanford.edu/projects/snli/) (SNLI) dataset with manual annotation of an SNLI subset.

In addition to the entailment/contradiction/neutral inference labels, a "bad translation" class was added.

The annotation was done by students of NLP or computational linguistics. 1499 pairs were each annotated by two students to check inter-annotator agreement (IAA).

## Dataset Details

The annotation covers 165390 Czech premise-hypothesis pairs from the train, dev, and test parts of SNLI, in the following distribution:

- train: 159650
- dev: 2860
- test: 2880

The premise-hypothesis pairs were translated with the [LINDAT Translation service](https://lindat.mff.cuni.cz/services/translation/).

The CUBBITT model used for the translation was published as:

Popel, M., Tomkova, M., Tomek, J. et al. Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals. Nat Commun 11, 4381 (2020). https://doi.org/10.1038/s41467-020-18073-9

## Annotation

Of the 165390 pairs, 151470 (91.58%) were considered understandable, i.e., they were not marked as "bad translation" (even so, a translation may not be accurate enough to preserve the entailment relation).

## Inter-Annotator Agreement

Two randomly chosen annotators received the same subset of pairs. The Cohen's kappa score on their annotations is 0.6757 (substantial agreement).

Confusion matrix:

![Confusion matrix](cm.png)

Full report on the agreement:

```
Simple Kappa Coefficient
--------------------------------
Kappa                    0.6757
ASE                      0.0146
95% Lower Conf Limit     0.6470
95% Upper Conf Limit     0.7044

Test of H0: Simple Kappa = 0

ASE under H0             0.0154
Z                        43.9031
One-sided Pr > Z         0.0000
Two-sided Pr > |Z|       0.0000
```
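
For reference, a simple kappa like the one reported above can be recomputed from the two annotators' label sequences, e.g. with scikit-learn. A minimal sketch; the label values below are illustrative, not taken from the dataset:

```python
from sklearn.metrics import cohen_kappa_score

# Labels assigned by the two annotators to the same pairs.
# Illustrative values only; the actual annotation uses four classes:
# entailment, contradiction, neutral, and "bad translation".
annotator_a = ["entailment", "neutral", "contradiction", "entailment"]
annotator_b = ["entailment", "neutral", "neutral", "entailment"]

print(f"Cohen's kappa: {cohen_kappa_score(annotator_a, annotator_b):.4f}")
```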

## Dataset Formats

The dataset is available in TSV and JSONL formats.

The JSONL version only contains pairs that were *not* annotated as "bad translation".

For pairs with multiple annotations, only those on which both annotators *agreed* are included.

The JSONL version contains 149660 sentence pairs.
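
Since the JSONL file stores one JSON object per line, it can be read with a few lines of Python. A minimal sketch; the file name is an assumption, so check the actual files in the repository:

```python
import json

# Read all sentence pairs from the JSONL file
# (the file name is an assumption; adjust it to the actual file).
pairs = []
with open("czech_snli.jsonl", encoding="utf-8") as f:
    for line in f:
        pairs.append(json.loads(line))

print(len(pairs))  # expected: 149660
print(pairs[0])    # one pair record, e.g. premise, hypothesis, and label
```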