---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    struct:
    - name: answer_start
      sequence: int64
    - name: text
      sequence: string
  - name: is_impossible
    dtype: bool
  splits:
  - name: train
    num_bytes: 42926273
    num_examples: 62859
  - name: validation
    num_bytes: 3210694
    num_examples: 4442
  download_size: 27872843
  dataset_size: 46136967
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- ja
---
A clone published to ensure reproducibility of evaluation scores and to distribute the SB Intuitions revised version.
Source: [yahoojapan/JGLUE on GitHub](https://github.com/yahoojapan/JGLUE/tree/main)
# JSQuAD
> JSQuAD is a Japanese version of SQuAD (Rajpurkar+, 2016), one of the datasets of reading comprehension.
> Each instance in the dataset consists of a question regarding a given context (Wikipedia article) and its answer.
> JSQuAD is based on SQuAD 1.1 (there are no unanswerable questions).
> We used the Japanese Wikipedia dump as of 20211101.
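A minimal loading sketch with the `datasets` library is shown below. The repository id `<this-repo>/JSQuAD` is a placeholder (the actual Hub path of this clone is not stated here), so substitute the real one.

```python
from datasets import load_dataset

# NOTE: "<this-repo>/JSQuAD" is a placeholder repository id; replace it
# with the actual Hugging Face Hub path of this dataset clone.
dataset = load_dataset("<this-repo>/JSQuAD")

print(dataset)              # DatasetDict with "train" and "validation" splits
print(dataset["train"][0])  # first training example
```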
## Licensing Information
[Creative Commons Attribution Share Alike 4.0 International](https://github.com/yahoojapan/JGLUE/blob/main/LICENSE)
- [datasets/jsquad-v1.1 on GitHub](https://github.com/yahoojapan/JGLUE/tree/main/datasets/jsquad-v1.1)
## Citation Information
```
@article{栗原健太郎2023,
title={JGLUE: 日本語言語理解ベンチマーク},
author={栗原 健太郎 and 河原 大輔 and 柴田 知秀},
journal={自然言語処理},
volume={30},
number={1},
pages={63-87},
year={2023},
url = "https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_article/-char/ja",
doi={10.5715/jnlp.30.63}
}
@inproceedings{kurihara-etal-2022-jglue,
title = "{JGLUE}: {J}apanese General Language Understanding Evaluation",
author = "Kurihara, Kentaro and
Kawahara, Daisuke and
Shibata, Tomohide",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.317",
pages = "2957--2966",
abstract = "To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.",
}
@InProceedings{Kurihara_nlp2022,
author = "栗原健太郎 and 河原大輔 and 柴田知秀",
title = "JGLUE: 日本語言語理解ベンチマーク",
booktitle = "言語処理学会第28回年次大会",
year = "2022",
url = "https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E8-4.pdf"
note= "in Japanese"
}
```
# Subsets
## default
- `id` (`str`): ID of the question
- `title` (`str`): title of the Wikipedia article, NFKC-normalized
- `context` (`str`): concatenation of the title and a paragraph, NFKC-normalized
- `question` (`str`): question, NFKC-normalized
- `answers` (`dict{answer_start(int), text(str)}`): a set of answers
  - `answer_start`: start position (character index)
  - `text`: answer text, NFKC-normalized
- `is_impossible` (`bool`): all values are false
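
A small sketch of how these fields can be inspected with `datasets` is given below. The repository id is again a placeholder, and the check on `answer_start` assumes the offsets index into the NFKC-normalized `context` string.

```python
from datasets import load_dataset

# NOTE: "<this-repo>/JSQuAD" is a placeholder repository id.
ds = load_dataset("<this-repo>/JSQuAD", split="validation")

example = ds[0]
print(example["id"], example["title"])
print(example["question"])

# `answers` is a struct of parallel lists: character offsets and answer strings.
for start, text in zip(example["answers"]["answer_start"], example["answers"]["text"]):
    print(f"answer '{text}' starts at character {start} of the context")
    # Should hold if answer_start indexes the normalized context (assumption).
    assert example["context"][start:start + len(text)] == text

# `is_impossible` is always False, since JSQuAD follows SQuAD 1.1.
print(example["is_impossible"])
```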