---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: polish-cdscorpus
pretty_name: Polish CDSCorpus
tags:
- sentences entailment and relatedness
dataset_info:
- config_name: cdsc-e
features:
- name: pair_ID
dtype: int32
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: entailment_judgment
dtype:
class_label:
names:
'0': NEUTRAL
'1': CONTRADICTION
'2': ENTAILMENT
splits:
- name: train
num_bytes: 1381894
num_examples: 8000
- name: test
num_bytes: 179392
num_examples: 1000
- name: validation
num_bytes: 174654
num_examples: 1000
download_size: 744169
dataset_size: 1735940
- config_name: cdsc-r
features:
- name: pair_ID
dtype: int32
- name: sentence_A
dtype: string
- name: sentence_B
dtype: string
- name: relatedness_score
dtype: float32
splits:
- name: train
num_bytes: 1349894
num_examples: 8000
- name: test
num_bytes: 175392
num_examples: 1000
- name: validation
num_bytes: 170654
num_examples: 1000
download_size: 747648
dataset_size: 1695940
configs:
- config_name: cdsc-e
data_files:
- split: train
path: cdsc-e/train-*
- split: test
path: cdsc-e/test-*
- split: validation
path: cdsc-e/validation-*
- config_name: cdsc-r
data_files:
- split: train
path: cdsc-r/train-*
- split: test
path: cdsc-r/test-*
- split: validation
path: cdsc-r/validation-*
---
# Dataset Card for Polish CDSCorpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://zil.ipipan.waw.pl/Scwad/CDSCorpus
- **Repository:**
- **Paper:** https://aclanthology.org/P17-1073/
- **Leaderboard:** https://klejbenchmark.com/leaderboard/
- **Point of Contact:** [Alina Wróblewska](mailto:[email protected])
### Dataset Summary
Polish CDSCorpus consists of 10K Polish sentence pairs that are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.
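As a usage sketch, the two configurations defined in this repository (`cdsc-e` for entailment and `cdsc-r` for relatedness) can be loaded with the `datasets` library; the Hub dataset ID `cdsc` is assumed here:
```python
from datasets import load_dataset

# Entailment configuration: pair_ID, sentence_A, sentence_B, entailment_judgment
cdsc_e = load_dataset("cdsc", "cdsc-e")

# Relatedness configuration: pair_ID, sentence_A, sentence_B, relatedness_score
cdsc_r = load_dataset("cdsc", "cdsc-r")

print(cdsc_e)              # DatasetDict with train, validation and test splits
print(cdsc_e["train"][0])  # a single annotated sentence pair
```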
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `pair_ID`: ID of the sentence pair
- `sentence_A`: first sentence of the pair
- `sentence_B`: second sentence of the pair

For the `cdsc-e` configuration:
- `entailment_judgment`: one of `NEUTRAL`, `CONTRADICTION`, or `ENTAILMENT` (see the decoding sketch below)

For the `cdsc-r` configuration:
- `relatedness_score`: a float expressing the degree of semantic relatedness between the two sentences
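A minimal sketch of inspecting one `cdsc-e` example and decoding the `entailment_judgment` integer back to its label name (again assuming the Hub dataset ID `cdsc`):
```python
from datasets import load_dataset

cdsc_e_train = load_dataset("cdsc", "cdsc-e", split="train")

example = cdsc_e_train[0]
label_feature = cdsc_e_train.features["entailment_judgment"]

print(example["sentence_A"])
print(example["sentence_B"])
# Maps the stored integer (0/1/2) back to NEUTRAL / CONTRADICTION / ENTAILMENT
print(label_feature.int2str(example["entailment_judgment"]))
```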
### Data Splits
Each configuration is split into train, validation, and test sets:

| Configuration | Train | Validation | Test  |
|---------------|------:|-----------:|------:|
| cdsc-e        | 8,000 | 1,000      | 1,000 |
| cdsc-r        | 8,000 | 1,000      | 1,000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The dataset is provided for research purposes only. Please check the dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0
### Citation Information
```
@inproceedings{wroblewska-krasnowska-kieras-2017-polish,
title = "{P}olish evaluation dataset for compositional distributional semantics models",
author = "Wr{\'o}blewska, Alina and
Krasnowska-Kiera{\'s}, Katarzyna",
editor = "Barzilay, Regina and
Kan, Min-Yen",
booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P17-1073",
doi = "10.18653/v1/P17-1073",
pages = "784--792",
abstract = "The paper presents a procedure of building an evaluation dataset for the validation of compositional distributional semantics models estimated for languages other than English. The procedure generally builds on steps designed to assemble the SICK corpus, which contains pairs of English sentences annotated for semantic relatedness and entailment, because we aim at building a comparable dataset. However, the implementation of particular building steps significantly differs from the original SICK design assumptions, which is caused by both lack of necessary extraneous resources for an investigated language and the need for language-specific transformation rules. The designed procedure is verified on Polish, a fusional language with a relatively free word order, and contributes to building a Polish evaluation dataset. The resource consists of 10K sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish.",
}
```
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.