---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
paperswithcode_id: hans
pretty_name: Heuristic Analysis for NLI Systems
dataset_info:
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': non-entailment
  - name: parse_premise
    dtype: string
  - name: parse_hypothesis
    dtype: string
  - name: binary_parse_premise
    dtype: string
  - name: binary_parse_hypothesis
    dtype: string
  - name: heuristic
    dtype: string
  - name: subcase
    dtype: string
  - name: template
    dtype: string
  config_name: plain_text
  splits:
  - name: train
    num_bytes: 15916371
    num_examples: 30000
  - name: validation
    num_bytes: 15893137
    num_examples: 30000
  download_size: 30947358
  dataset_size: 31809508
---
# Dataset Card for "hans"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description

- **Homepage:** [https://github.com/tommccoy1/hans](https://github.com/tommccoy1/hans)
- **Repository:** [https://github.com/tommccoy1/hans](https://github.com/tommccoy1/hans)
- **Paper:** [Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference](https://arxiv.org/abs/1902.01007)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 30.94 MB
- **Size of the generated dataset:** 31.81 MB
- **Total amount of disk used:** 62.76 MB
### Dataset Summary

The HANS (Heuristic Analysis for NLI Systems) dataset is an NLI evaluation set that tests specific hypotheses about invalid heuristics that NLI models are likely to learn. It targets three syntactic heuristics: the lexical overlap heuristic (assuming a premise entails any hypothesis built from its words), the subsequence heuristic (any contiguous subsequence of the premise), and the constituent heuristic (any constituent of the premise's parse).
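
The evaluation set can be loaded with the `datasets` library; a minimal sketch:

```python
from datasets import load_dataset

# Downloads and caches the "plain_text" config of HANS.
hans = load_dataset("hans")

print(hans)                   # DatasetDict with "train" and "validation" splits
print(hans["validation"][0])  # premise, hypothesis, label, parses, heuristic, ...
```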
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

The dataset is monolingual English (`en`). Premises and hypotheses are generated from templates (see the `template` field).
## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 30.94 MB
- **Size of the generated dataset:** 31.81 MB
- **Total amount of disk used:** 62.76 MB
An example of 'train' looks as follows (the values are illustrative of the format rather than quoted verbatim from the data):
```
{
    "premise": "The president advised the doctor .",
    "hypothesis": "The doctor advised the president .",
    "label": 1,
    "parse_premise": "(ROOT (S (NP (DT The) (NN president)) (VP (VBD advised) (NP (DT the) (NN doctor))) (. .)))",
    "parse_hypothesis": "(ROOT (S (NP (DT The) (NN doctor)) (VP (VBD advised) (NP (DT the) (NN president))) (. .)))",
    "binary_parse_premise": "( ( The president ) ( ( advised ( the doctor ) ) . ) )",
    "binary_parse_hypothesis": "( ( The doctor ) ( ( advised ( the president ) ) . ) )",
    "heuristic": "lexical_overlap",
    "subcase": "ln_subject/object_swap",
    "template": "temp1"
}
```
### Data Fields

The data fields are the same among all splits.

#### plain_text
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values `entailment` (0) and `non-entailment` (1); see the sketch after this list for mapping the integer ids back to names.
- `parse_premise`: a `string` feature.
- `parse_hypothesis`: a `string` feature.
- `binary_parse_premise`: a `string` feature.
- `binary_parse_hypothesis`: a `string` feature.
- `heuristic`: a `string` feature.
- `subcase`: a `string` feature.
- `template`: a `string` feature.
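
The `label` column is stored as integer ids; a minimal sketch using the `ClassLabel` feature of `datasets` to convert between ids and names:

```python
from datasets import load_dataset

hans = load_dataset("hans", split="validation")

# The ClassLabel feature carries the id <-> name mapping declared above.
label_feature = hans.features["label"]
print(label_feature.int2str(0))                 # entailment
print(label_feature.int2str(1))                 # non-entailment
print(label_feature.str2int("non-entailment"))  # 1
```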
### Data Splits

| name       | train | validation |
|------------|------:|-----------:|
| plain_text | 30000 |      30000 |
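
HANS results are conventionally reported per heuristic and per gold label rather than as a single accuracy, because a degenerate model that always predicts `entailment` gets one label class entirely right and the other entirely wrong. A minimal bookkeeping sketch, where `predict` is a hypothetical stand-in for an actual NLI model:

```python
from collections import Counter

from datasets import load_dataset

hans = load_dataset("hans", split="validation")

def predict(example):
    # Hypothetical stand-in: replace with a real model's prediction,
    # 0 = entailment, 1 = non-entailment.
    return 0

correct, total = Counter(), Counter()
for example in hans:
    key = (example["heuristic"], example["label"])
    total[key] += 1
    correct[key] += int(predict(example) == example["label"])

for key in sorted(total):
    heuristic, label = key
    print(f"{heuristic:16s} label={label}  acc={correct[key] / total[key]:.3f}")
```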
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information

```
@article{DBLP:journals/corr/abs-1902-01007,
  author        = {R. Thomas McCoy and
                   Ellie Pavlick and
                   Tal Linzen},
  title         = {Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural
                   Language Inference},
  journal       = {CoRR},
  volume        = {abs/1902.01007},
  year          = {2019},
  url           = {http://arxiv.org/abs/1902.01007},
  archivePrefix = {arXiv},
  eprint        = {1902.01007},
  timestamp     = {Tue, 21 May 2019 18:03:36 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1902-01007.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions

Thanks to [@TevenLeScao](https://github.com/TevenLeScao) and [@thomwolf](https://github.com/thomwolf) for adding this dataset.