---
---

# Dataset Card for "indic_glue"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://indicnlp.ai4bharat.org/indic-glue/#natural-language-inference](https://indicnlp.ai4bharat.org/indic-glue/#natural-language-inference)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3351.18 MB
- **Size of the generated dataset:** 1573.33 MB
- **Total amount of disk used:** 4924.51 MB

### [Dataset Summary](#dataset-summary)

IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide
variety of tasks and covers 11 major Indian languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te.

The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task
in which a system must read a sentence with a pronoun and select the referent of that pronoun from
a list of choices. The examples are manually constructed to foil simple statistical methods: each
one is contingent on contextual information provided by a single word or phrase in the sentence.
To convert the problem into sentence-pair classification, we construct sentence pairs by replacing
the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the
pronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of
new examples derived from fiction books that was shared privately by the authors of the original
corpus. While the included training set is balanced between the two classes, the test set is imbalanced
between them (65% not entailment). Also, due to a data quirk, the development set is adversarial:
hypotheses are sometimes shared between training and development examples, so if a model memorizes the
training examples, it will predict the wrong label on the corresponding development set
examples. As with QNLI, each example is evaluated separately, so there is no systematic correspondence
between a model's score on this task and its score on the unconverted original task. We
call the converted dataset WNLI (Winograd NLI). This dataset has been translated and publicly released for 3
Indian languages by AI4Bharat.
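The pronoun-substitution step described above can be sketched in a few lines of Python. This is only an illustration of the conversion scheme, not code from the dataset's builder; the example sentence is the classic English Winograd schema, not one of the AI4Bharat translations.

```python
def winograd_to_pairs(sentence, pronoun, candidates):
    """Build one (premise, hypothesis) pair per candidate referent by
    replacing the first whole-word occurrence of the pronoun."""
    pairs = []
    for referent in candidates:
        tokens = sentence.split()
        idx = tokens.index(pronoun)  # whole-word match, so 'it' in 'fit' is untouched
        hypothesis = " ".join(tokens[:idx] + [referent] + tokens[idx + 1:])
        pairs.append((sentence, hypothesis))
    return pairs

pairs = winograd_to_pairs(
    "The trophy didn't fit in the suitcase because it was too big.",
    "it",
    ["the trophy", "the suitcase"],
)
for premise, hypothesis in pairs:
    print(hypothesis)
```

An entailment model then labels each pair independently; only the pair built from the true referent is entailed.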
### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### actsa-sc.te

- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 1.63 MB
- **Total amount of disk used:** 1.99 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "label": 0,
    "text": "\"ప్రయాణాల్లో ఉన్నవారికోసం బస్ స్టేషన్లు, రైల్వే స్టేషన్లలో పల్స్పోలియో బూతులను ఏర్పాటు చేసి చిన్నారులకు పోలియో చుక్కలు వేసేలా ఏర..."
}
```

#### bbca.hi

- **Size of downloaded dataset files:** 5.50 MB
- **Size of the generated dataset:** 26.35 MB
- **Total amount of disk used:** 31.85 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "label": "pakistan",
    "text": "\"नेटिजन यानि इंटरनेट पर सक्रिय नागरिक अब ट्विटर पर सरकार द्वारा लगाए प्रतिबंधों के समर्थन या विरोध में अपने विचार व्यक्त करते है..."
}
```

#### copa.en

- **Size of downloaded dataset files:** 0.72 MB
- **Size of the generated dataset:** 0.11 MB
- **Total amount of disk used:** 0.83 MB

An example of 'validation' looks as follows.
```
{
    "choice1": "I swept the floor in the unoccupied room.",
    "choice2": "I shut off the light in the unoccupied room.",
    "label": 1,
    "premise": "I wanted to conserve energy.",
    "question": "effect"
}
```
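In each COPA record, `label` is the index of the correct alternative (0 selects `choice1`, 1 selects `choice2`), and `question` says whether the alternatives are candidate causes or effects of the premise. A minimal sketch, using the copa.en validation example shown above:

```python
# The record below is the copa.en validation example from this card.
example = {
    "premise": "I wanted to conserve energy.",
    "choice1": "I swept the floor in the unoccupied room.",
    "choice2": "I shut off the light in the unoccupied room.",
    "question": "effect",
    "label": 1,
}

def correct_choice(ex):
    """Return the alternative selected by the gold label."""
    return ex["choice1"] if ex["label"] == 0 else ex["choice2"]

print(correct_choice(example))
```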
#### copa.gu

- **Size of downloaded dataset files:** 0.72 MB
- **Size of the generated dataset:** 0.22 MB
- **Total amount of disk used:** 0.94 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "choice1": "\"સ્ત્રી જાણતી હતી કે તેનો મિત્ર મુશ્કેલ સમયમાંથી પસાર થઈ રહ્યો છે.\"...",
    "choice2": "\"મહિલાને લાગ્યું કે તેના મિત્રએ તેની દયાળુ લાભ લીધો છે.\"...",
    "label": 0,
    "premise": "મહિલાએ તેના મિત્રની મુશ્કેલ વર્તન સહન કરી.",
    "question": "cause"
}
```

#### copa.hi

- **Size of downloaded dataset files:** 0.72 MB
- **Size of the generated dataset:** 0.22 MB
- **Total amount of disk used:** 0.94 MB

An example of 'validation' looks as follows.
```
{
    "choice1": "मैंने उसका प्रस्ताव ठुकरा दिया।",
    "choice2": "उन्होंने मुझे उत्पाद खरीदने के लिए राजी किया।",
    "label": 0,
    "premise": "मैंने सेल्समैन की पिच पर शक किया।",
    "question": "effect"
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### actsa-sc.te
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (0), `negative` (1).

#### bbca.hi
- `label`: a `string` feature.
- `text`: a `string` feature.

#### copa.en
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: an `int32` feature.

#### copa.gu
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: an `int32` feature.

#### copa.hi
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: an `int32` feature.
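For actsa-sc.te the `label` field is stored as an integer ClassLabel; the mapping below simply mirrors the values listed above (0 → `positive`, 1 → `negative`). When loading through the `datasets` library, the same mapping is available on the feature itself via its `int2str`/`str2int` methods, so the hand-written list here is only for illustration.

```python
# Hand-written mirror of the actsa-sc.te label mapping stated in this card.
ACTSA_LABELS = ["positive", "negative"]

def int2str(label_id):
    """Map an integer label to its class name."""
    return ACTSA_LABELS[label_id]

def str2int(name):
    """Map a class name back to its integer label."""
    return ACTSA_LABELS.index(name)

print(int2str(0))
```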
### [Data Splits Sample Size](#data-splits-sample-size)

#### actsa-sc.te

|           |train|validation|test|
|-----------|----:|---------:|---:|
|actsa-sc.te| 4328|       541| 541|

#### bbca.hi

|       |train|test|
|-------|----:|---:|
|bbca.hi| 3467| 866|

#### copa.en

|       |train|validation|test|
|-------|----:|---------:|---:|
|copa.en|  400|       100| 500|

#### copa.gu

|       |train|validation|test|
|-------|----:|---------:|---:|
|copa.gu|  362|        88| 448|

#### copa.hi

|       |train|validation|test|
|-------|----:|---------:|---:|
|copa.hi|  362|        88| 449|

## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@inproceedings{kakwani2020indicnlpsuite,
    title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}},
    author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar},
    year={2020},
    booktitle={Findings of EMNLP},
}

@inproceedings{Levesque2011TheWS,
    title={The Winograd Schema Challenge},
    author={H. Levesque and E. Davis and L. Morgenstern},
    booktitle={KR},
    year={2011}
}
```

### [Contributions](#contributions)

Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.