---
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    struct:
    - name: answer_start
      sequence: int64
    - name: text
      sequence: string
  - name: is_impossible
    dtype: bool
  splits:
  - name: train
    num_bytes: 43238824
    num_examples: 62859
  - name: validation
    num_bytes: 3233443
    num_examples: 4442
  download_size: 9405217
  dataset_size: 46472267
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---


A clone of JGLUE for ensuring reproducibility of evaluation scores and for publishing the SB Intuitions revised version.

Source: [yahoojapan/JGLUE](https://github.com/yahoojapan/JGLUE/tree/main)
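
A minimal loading sketch with the 🤗 Datasets library, based on the `default` config and the `train`/`validation` splits declared in the front matter above. The repository ID below is a placeholder, not the actual Hub path of this clone:

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the actual Hub path of this clone.
REPO_ID = "org-name/jglue-clone"

# The front matter declares a single "default" config with "train" and "validation" splits.
dataset = load_dataset(REPO_ID)

print(dataset)              # DatasetDict listing the available splits
print(dataset["train"][0])  # first training example as a plain dict
```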

## JCommonsenseQA

> JCommonsenseQA is a Japanese version of CommonsenseQA (Talmor+, 2019), which is a multiple-choice question answering dataset that requires commonsense reasoning ability.
> It is built using crowdsourcing with seeds extracted from the knowledge base ConceptNet.

### Data Fields

- `q_id` (`str`): question ID
- `question` (`str`): question text
- `choice{0..4}` (`str` each): the five answer choices
- `label` (`int`): index of the correct choice
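
As an illustrative sketch only (assuming the field names listed above and a `DatasetDict` loaded as in the earlier snippet), each record can be rendered as a multiple-choice prompt:

```python
def format_example(example: dict) -> str:
    """Render one example as a multiple-choice prompt.

    Assumes the fields described above: `q_id`, `question`,
    `choice0`..`choice4`, and `label` (index of the correct choice).
    """
    choices = [example[f"choice{i}"] for i in range(5)]
    lines = [f"[{example['q_id']}] {example['question']}"]
    lines += [f"  {i}. {choice}" for i, choice in enumerate(choices)]
    lines.append(f"Answer: {example['label']} ({choices[example['label']]})")
    return "\n".join(lines)

# Hypothetical usage with the dataset loaded earlier:
# print(format_example(dataset["validation"][0]))
```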

### Licensing Information

[Creative Commons Attribution Share Alike 4.0 International](https://github.com/yahoojapan/JGLUE/blob/main/LICENSE)

### Citation Information

```
@article{栗原健太郎2023,
  title={JGLUE: 日本語言語理解ベンチマーク},
  author={栗原 健太郎 and 河原 大輔 and 柴田 知秀},
  journal={自然言語処理},
  volume={30},
  number={1},
  pages={63--87},
  year={2023},
  url={https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_article/-char/ja},
  doi={10.5715/jnlp.30.63}
}

@inproceedings{kurihara-etal-2022-jglue,
    title = "{JGLUE}: {J}apanese General Language Understanding Evaluation",
    author = "Kurihara, Kentaro  and
      Kawahara, Daisuke  and
      Shibata, Tomohide",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.317",
    pages = "2957--2966",
    abstract = "To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.",
}

@InProceedings{Kurihara_nlp2022,
  author = "栗原健太郎 and 河原大輔 and 柴田知秀",
  title = "JGLUE: 日本語言語理解ベンチマーク",
  booktitle = "言語処理学会第28回年次大会",
  year = "2022",
  url = "https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E8-4.pdf",
  note = "in Japanese"
}
```