---
language:
- ja
language_creators:
- other
multilinguality:
- monolingual
pretty_name: JaNLI
task_categories:
- text-classification
task_ids:
- natural-language-inference
license: cc-by-sa-4.0
---
# Dataset Card for JaNLI
## Table of Contents
- [Dataset Card for JaNLI](#dataset-card-for-janli)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
      - [base](#base)
      - [original](#original)
    - [Data Fields](#data-fields)
      - [base](#base-1)
      - [original](#original-1)
    - [Data Splits](#data-splits)
    - [Annotations](#annotations)
  - [Additional Information](#additional-information)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/verypluming/JaNLI
- **Repository:** https://github.com/verypluming/JaNLI
- **Paper:** https://aclanthology.org/2021.blackboxnlp-1.26/
### Dataset Summary
The JaNLI (Japanese Adversarial NLI) dataset, inspired by the English HANS dataset, is designed to require an understanding of Japanese linguistic phenomena and to expose the weaknesses of NLI models.
### Languages
The language data in JaNLI is in Japanese (BCP-47 [ja-JP](https://www.rfc-editor.org/info/bcp47)).
## Dataset Structure
### Data Instances
Loading the dataset without a `name` argument returns the `base` configuration; to load a specific configuration, pass its name via the `name` argument:
```python
import datasets as ds

dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 13680
#     })
#     test: Dataset({
#         features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 720
#     })
# })

dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli", name="original")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 13680
#     })
#     test: Dataset({
#         features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 720
#     })
# })
```
#### base
An example from the `base` configuration looks as follows:
```json
{
    "id": 12,
    "premise": "若者がフットボール選手を見ている",
    "hypothesis": "フットボール選手を若者が見ている",
    "label": 0,
    "heuristics": "overlap-full",
    "number_of_NPs": 2,
    "semtag": "scrambling"
}
```
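Examples carrying a particular phenomenon tag, such as the `scrambling` instance above, can be pulled out with a filter. This is a small sketch; the tag value is taken from the example shown, and only standard `datasets` calls are used:

```python
import datasets as ds

train = ds.load_dataset("hpprc/janli", split="train")

# Keep only the examples annotated with the `scrambling` phenomenon tag.
scrambling = train.filter(lambda example: example["semtag"] == "scrambling")
print(scrambling.num_rows)
print(scrambling[0])
```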
#### original
An example from the `original` configuration looks as follows:
```json
{
    "id": 12,
    "sentence_A_Ja": "若者がフットボール選手を見ている",
    "sentence_B_Ja": "フットボール選手を若者が見ている",
    "entailment_label_Ja": 0,
    "heuristics": "overlap-full",
    "number_of_NPs": 2,
    "semtag": "scrambling"
}
```
### Data Fields
#### base
A version adopting the column names of a typical NLI dataset.

| Name          | Description                                                                                                                                                                                |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| id            | The number of the sentence pair.                                                                                                                                                           |
| premise       | The premise (sentence_A_Ja).                                                                                                                                                               |
| hypothesis    | The hypothesis (sentence_B_Ja).                                                                                                                                                            |
| label         | The correct label for the sentence pair (either `entailment` or `non-entailment`); in the setting described in the paper, non-entailment = neutral + contradiction (entailment_label_Ja).  |
| heuristics    | The heuristics (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset.                                                             |
| number_of_NPs | The number of noun phrases in a sentence.                                                                                                                                                  |
| semtag        | The linguistic phenomena tag.                                                                                                                                                              |
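The data instances above show `label` as an integer. If the loader stores it as a `datasets.ClassLabel` (an assumption; it may instead be a plain integer or string column), the mapping between integers and the `entailment` / `non-entailment` names can be read from the feature metadata, as in this sketch:

```python
import datasets as ds

# Load the base configuration's test split.
dataset = ds.load_dataset("hpprc/janli", split="test")

label_feature = dataset.features["label"]
if isinstance(label_feature, ds.ClassLabel):
    # A ClassLabel carries the integer-to-name mapping.
    print(label_feature.names)
    print(label_feature.int2str(dataset[0]["label"]))
else:
    # Otherwise the raw value is returned and the mapping must be taken
    # from this card instead.
    print(dataset[0]["label"])
```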
#### original
The original version, retaining the unaltered column names.

| Name                | Description                                                                                                                                                         |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| id                  | The number of the sentence pair.                                                                                                                                    |
| sentence_A_Ja       | The premise.                                                                                                                                                        |
| sentence_B_Ja       | The hypothesis.                                                                                                                                                     |
| entailment_label_Ja | The correct label for the sentence pair (either `entailment` or `non-entailment`); in the setting described in the paper, non-entailment = neutral + contradiction. |
| heuristics          | The heuristics (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset.                                      |
| number_of_NPs       | The number of noun phrases in a sentence.                                                                                                                           |
| semtag              | The linguistic phenomena tag.                                                                                                                                       |
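Code written against the `base` column names can also consume the `original` configuration after a column rename. This is a minimal sketch; the mapping simply mirrors the two tables above:

```python
import datasets as ds

original = ds.load_dataset("hpprc/janli", name="original", split="test")

# Rename the original columns to the base-style schema.
renamed = original.rename_columns(
    {
        "sentence_A_Ja": "premise",
        "sentence_B_Ja": "hypothesis",
        "entailment_label_Ja": "label",
    }
)
print(renamed.column_names)
```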
### Data Splits
| name     |  train | validation | test |
| -------- | -----: | ---------: | ---: |
| base     | 13,680 |            |  720 |
| original | 13,680 |            |  720 |
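The split sizes can be verified once either configuration is loaded; note that there is no validation split:

```python
import datasets as ds

dataset = ds.load_dataset("hpprc/janli")

# Print the number of rows per available split.
print({split: d.num_rows for split, d in dataset.items()})
# {'train': 13680, 'test': 720}
```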
### Annotations
The annotation process for this Japanese NLI dataset involves tagging each pair (P, H) of a premise and hypothesis with a label for structural pattern and linguistic phenomenon.
The structural relationship between premise and hypothesis sentences is classified into five patterns, with each pattern associated with a type of heuristic that can lead to incorrect predictions of the entailment relation.
Additionally, 11 categories of Japanese linguistic phenomena and constructions are focused on for generating the five patterns of adversarial inferences.
For each linguistic phenomenon, a template for the premise sentence P is fixed, and multiple templates for hypothesis sentences H are created.
In total, 144 templates for (P, H) pairs are produced.
Each pair of premise and hypothesis sentences is tagged with an entailment label (`entailment` or `non-entailment`), a structural pattern, and a linguistic phenomenon label.
The JaNLI dataset is generated by instantiating each template 100 times, resulting in a total of 14,400 examples.
The same number of entailment and non-entailment examples are generated for each phenomenon.
The structural patterns are annotated with the templates for each linguistic phenomenon, and the ratio of `entailment` and `non-entailment` examples is not necessarily 1:1 for each pattern.
The dataset uses a total of 158 words (nouns and verbs), which occur more than 20 times in the JSICK and JSNLI datasets.
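As a quick check of this annotation scheme, the per-tag counts of structural patterns and linguistic phenomena can be tabulated from the loaded data (a minimal sketch using the `base` column names):

```python
from collections import Counter

import datasets as ds

dataset = ds.load_dataset("hpprc/janli")

for split_name, split in dataset.items():
    # Count structural-pattern and linguistic-phenomenon tags per split.
    print(split_name, Counter(split["heuristics"]))
    print(split_name, Counter(split["semtag"]))
```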
## Additional Information
- [verypluming/JaNLI](https://github.com/verypluming/JaNLI)
- [Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference](https://aclanthology.org/2021.blackboxnlp-1.26/)
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@InProceedings{yanaka-EtAl:2021:blackbox,
  author    = {Yanaka, Hitomi and Mineshima, Koji},
  title     = {Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference},
  booktitle = {Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP2021)},
  url       = {https://aclanthology.org/2021.blackboxnlp-1.26/},
  year      = {2021},
}
```
### Contributions
Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and [Koji Mineshima](https://abelard.flet.keio.ac.jp/person/minesima/index-j.html) for creating this dataset.