---
annotations_creators:
- Duygu Altinok
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- nyu-mll/glue
task_categories:
- text-classification
task_ids:
- acceptability-classification
- natural-language-inference
- semantic-similarity-scoring
- sentiment-classification
- text-scoring
pretty_name: TrGLUE (GLUE for Turkish language)
config_names:
- cola
- mnli
- sst2
- mrpc
- qnli
- qqp
- rte
- stsb
- wnli
tags:
- qa-nli
- coreference-nli
- paraphrase-identification
dataset_info:
- config_name: cola
  features:
  - name: sentence
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': unacceptable
          '1': acceptable
  splits:
  - name: train
    num_bytes: 1025960
    num_examples: 7916
  - name: validation
    num_bytes: 130843
    num_examples: 1000
  - name: test
    num_bytes: 129741
    num_examples: 1000
- config_name: mnli
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 23742281
    num_examples: 126351
  - name: validation_matched
    num_bytes: 1551330
    num_examples: 8302
  - name: validation_mismatched
    num_bytes: 1882471
    num_examples: 8161
  - name: test_matched
    num_bytes: 1723631
    num_examples: 8939
  - name: test_mismatched
    num_bytes: 1902838
    num_examples: 9139
  download_size: 160944
- config_name: mrpc
  features:
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': not_equivalent
          '1': equivalent
  splits:
  - name: train
    num_bytes: 971403
    num_examples: 3210
  - name: validation
    num_bytes: 122471
    num_examples: 406
  - name: test
    num_bytes: 426814
    num_examples: 1591
  download_size: 1572159
- config_name: qnli
  features:
  - name: question
    dtype: string
  - name: sentence
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': not_entailment
  splits:
  - name: train
    num_bytes: 10039361
    num_examples: 39981
  - name: validation
    num_bytes: 678829
    num_examples: 2397
  - name: test
    num_bytes: 547379
    num_examples: 1913
  download_size: 19278324
- config_name: qqp
  features:
  - name: question1
    dtype: string
  - name: question2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': not_duplicate
          '1': duplicate
  splits:
  - name: train
    num_bytes: 22640320
    num_examples: 155767
  - name: validation
    num_bytes: 3795876
    num_examples: 26070
  - name: test
    num_bytes: 11984165
    num_examples: 67471
  download_size: 73982265
- config_name: rte
  features:
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': not_entailment
  splits:
  - name: train
    num_bytes: 723360
    num_examples: 2015
  - name: validation
    num_bytes: 68999
    num_examples: 226
  - name: test
    num_bytes: 777128
    num_examples: 2410
  download_size: 1274409
- config_name: sst2
  features:
  - name: sentence
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': negative
          '1': positive
  splits:
  - name: train
    num_bytes: 5586957
    num_examples: 60411
  - name: validation
    num_bytes: 733500
    num_examples: 8905
  - name: test
    num_bytes: 742661
    num_examples: 8934
  download_size: 58918801
- config_name: stsb
  features:
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: label
    dtype: float32
  splits:
  - name: train
    num_bytes: 719415
    num_examples: 5254
  - name: validation
    num_bytes: 206991
    num_examples: 1417
  - name: test
    num_bytes: 163808
    num_examples: 1291
  download_size: 766983
- config_name: wnli
  features:
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': not_entailment
          '1': entailment
  splits:
  - name: train
    num_bytes: 83577
    num_examples: 509
  - name: validation
    num_bytes: 10746
    num_examples: 62
  - name: test
    num_bytes: 27058
    num_examples: 112
  download_size: 63522
configs:
- config_name: mnli
  data_files:
  - split: train
    path: mnli/train-*
  - split: validation_matched
    path: mnli/valid_matched-*
  - split: validation_mismatched
    path: mnli/valid_mismatched-*
  - split: test_matched
    path: mnli/test_matched-*
  - split: test_mismatched
    path: mnli/test_mismatched-*
- config_name: mrpc
  data_files:
  - split: train
    path: mrpc/train-*
  - split: validation
    path: mrpc/validation-*
  - split: test
    path: mrpc/test-*
- config_name: qnli
  data_files:
  - split: train
    path: qnli/train-*
  - split: validation
    path: qnli/validation-*
  - split: test
    path: qnli/test-*
- config_name: qqp
  data_files:
  - split: train
    path: qqp/train-*
  - split: validation
    path: qqp/validation-*
  - split: test
    path: qqp/test-*
- config_name: rte
  data_files:
  - split: train
    path: rte/train-*
  - split: validation
    path: rte/validation-*
  - split: test
    path: rte/test-*
- config_name: sst2
  data_files:
  - split: train
    path: sst2/train-*
  - split: validation
    path: sst2/validation-*
  - split: test
    path: sst2/test-*
- config_name: stsb
  data_files:
  - split: train
    path: stsb/train-*
  - split: validation
    path: stsb/validation-*
  - split: test
    path: stsb/test-*
- config_name: wnli
  data_files:
  - split: train
    path: wnli/train-*
  - split: validation
    path: wnli/validation-*
  - split: test
    path: wnli/test-*
- config_name: cola
  data_files:
  - split: train
    path: cola/train-*
  - split: validation
    path: cola/validation-*
  - split: test
    path: cola/test-*
---

# TrGLUE - A Natural Language Understanding Benchmark for Turkish


<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/trgluelogo.png"  width="30%" height="30%">

# Dataset Card for TrGLUE

TrGLUE is a natural language understanding benchmark that comprises several single-sentence and sentence-pair classification tasks.
The inspiration is, of course, the original GLUE benchmark.
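
Every task ships as a separate config (`cola`, `sst2`, `mrpc`, `stsb`, `qqp`, `mnli`, `qnli`, `rte`, `wnli`). A minimal loading sketch with the 🤗 `datasets` library (the repo id `turkish-nlp-suite/TrGLUE` below is assumed from this card):

```python
from datasets import load_dataset

# Load one task by its config name; each task is a separate config.
cola = load_dataset("turkish-nlp-suite/TrGLUE", "cola")

print(cola)              # DatasetDict with train / validation / test splits
print(cola["train"][0])  # e.g. {'sentence': '...', 'label': 1}
```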

## Tasks 

### Single Sentence Tasks


**TrCOLA** The original **C**orpus **o**f **L**inguistic **A**cceptability consists of sentences compiled from English linguistics textbooks. The task is to determine whether a sentence is grammatically acceptable.
Our corpus is likewise compiled from Turkish linguistics textbooks and includes morphological, syntactic, and semantic violations.
This dataset also has a [standalone repo on HuggingFace](https://huggingface.co/datasets/turkish-nlp-suite/TrCOLA).

**TrSST-2** The Stanford Sentiment Treebank is a sentiment analysis dataset that includes sentences from movie reviews, annotated by human annotators.
The task is to predict the sentiment of a given sentence. Our dataset is compiled from the movie review websites BeyazPerde.com and Sinefil.com; both the reviews and the sentiment ratings come from those websites.
Here we offer a binary classification task to stay compatible with the original GLUE task; a 10-way classification challenge is available in this dataset's [standalone HuggingFace repo](https://huggingface.co/datasets/turkish-nlp-suite/BuyukSinema).

### Sentence Pair Tasks

**TrMRPC** The Microsoft Research Paraphrase Corpus is a dataset of sentence pairs automatically extracted from online news sources, with human annotations.
The task is to determine whether the sentences in each pair are semantically equivalent. Our dataset is a direct translation of this dataset.

**TrSTS-B**  The Semantic Textual Similarity Benchmark is a semantic similarity dataset. This dataset contains sentence pairs compiled from news headlines, video and image captions.
Each pair is annotated with a similarity score from 1 to 5. Our dataset is a direct translation of this dataset.

**TrQQP** The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora.
The task is to determine whether a pair of questions are semantically equivalent. Our dataset is a direct translation of this dataset.

**TrMNLI** The Multi-Genre Natural Language Inference Corpus is a crowdsourced dataset for the textual entailment task.
Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis (contradiction), or neither (neutral).
The premise sentences are compiled from different sources, including transcribed speech, fiction, and more. Our dataset is a direct translation of this dataset.

**TrQNLI** The Stanford Question Answering Dataset (SQuAD) is a well-known question-answering dataset consisting of context-question pairs, 
where the context text (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator).
QNLI is a binary classification version of SQuAD, where the task is to decide whether the context text contains the answer to the question text.
Our dataset is a direct translation of this dataset.

**TrRTE** The Recognizing Textual Entailment dataset is compiled from a series of annual textual entailment challenges, namely RTE1, RTE2, RTE3, and RTE5.
The task is again textual entailment. Our dataset is a direct translation of this dataset.

**TrWNLI** The Winograd Schema Challenge, introduced by Levesque et al. in 2011, is a type of reading comprehension task. 
In this challenge, a system is tasked with reading a sentence containing a pronoun and determining the correct referent for that pronoun from a set of choices. 
These examples are deliberately designed to outsmart basic statistical methods by relying on contextual cues provided by specific words or phrases within the sentence.
To transform this challenge into a sentence pair classification task, the creators of the benchmark generated pairs of sentences by replacing the ambiguous pronoun with each potential referent. The objective is to predict whether the sentence with the pronoun substituted is entailed by the original sentence.
Our dataset is a direct translation of this dataset.
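
Since the single-sentence and sentence-pair tasks expose different columns (see the feature schemas in this card's metadata), a small mapping keeps tokenization uniform across tasks. The sketch below mirrors the `task_to_keys` convention from HF's GLUE fine-tuning example; the helper function is illustrative:

```python
# Input columns per TrGLUE task, read off the feature schemas in this card.
# Single-sentence tasks map to (column, None); pair tasks to (col1, col2).
TASK_TO_KEYS = {
    "cola": ("sentence", None),
    "sst2": ("sentence", None),
    "mrpc": ("sentence1", "sentence2"),
    "stsb": ("sentence1", "sentence2"),
    "qqp":  ("question1", "question2"),
    "mnli": ("premise", "hypothesis"),
    "qnli": ("question", "sentence"),
    "rte":  ("sentence1", "sentence2"),
    "wnli": ("sentence1", "sentence2"),
}

def tokenize_fn(examples, tokenizer, task):
    """Tokenize a batch for either a single-sentence or a sentence-pair task."""
    key1, key2 = TASK_TO_KEYS[task]
    if key2 is None:
        return tokenizer(examples[key1], truncation=True)
    return tokenizer(examples[key1], examples[key2], truncation=True)
```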


## Dataset Statistics

The size of each subset is given below:


| Subset | Size (examples) |
|---|---|
| TrCOLA | 9.92K |
| TrSST-2 | 78K |
| TrMRPC | 5.23K |
| TrSTS-B | 7.96K |
| TrQQP | 249K |
| TrMNLI | 161K |
| TrQNLI | 44.3K |
| TrRTE | 4.65K |
| TrWNLI | 683 |
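
The totals above can be reproduced by summing the split sizes of each config. A sketch (the repo id is assumed as before; downloading all configs, TrQQP in particular, takes a while):

```python
from datasets import load_dataset

REPO = "turkish-nlp-suite/TrGLUE"  # assumed repo id
CONFIGS = ["cola", "sst2", "mrpc", "stsb", "qqp", "mnli", "qnli", "rte", "wnli"]

for name in CONFIGS:
    ds = load_dataset(REPO, name)
    total = sum(split.num_rows for split in ds.values())
    print(f"{name}: {total:,} examples across splits {list(ds.keys())}")
```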



For more information about dataset statistics, please visit the [research paper]().


## Dataset Curation

Some of the datasets are translations of the original GLUE sets, while others were compiled by us. TrSST-2 is scraped from the Turkish movie review websites Sinefil and Beyazperde.
TrCOLA is compiled from openly available linguistics books; violations were then generated with the
LLM [Snowflake Arctic](https://www.snowflake.com/en/blog/arctic-open-efficient-foundation-language-models-snowflake/) and
then curated by the data company [Co-one](https://www.co-one.co/).
For more information please refer to [TrCOLA's standalone repo](https://huggingface.co/datasets/turkish-nlp-suite/TrCOLA) and the [research paper]().

The rest of the datasets are direct translations; all translations were done with the open-source LLM Snowflake Arctic.
We translated the datasets, then made a second pass over the data to eliminate hallucinations.


## Benchmarking

We provide a benchmarking script at the [TrGLUE GitHub repo](https://github.com/turkish-nlp-suite/TrGLUE).
The script is the same as HF's original GLUE benchmarking script, except for the success metric of TrSST-2 (the original task's metric is binary accuracy; ours is the Matthews correlation coefficient).
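
For illustration, the TrSST-2 metric can be computed with the `evaluate` library; the predictions and labels below are hypothetical:

```python
import evaluate

matthews = evaluate.load("matthews_correlation")

preds  = [1, 0, 1, 1, 0]  # hypothetical model predictions
labels = [1, 0, 0, 1, 0]  # hypothetical gold labels

# TP=2, TN=2, FP=1, FN=0  ->  MCC = 4 / 6 ≈ 0.667
print(matthews.compute(predictions=preds, references=labels))
```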

We benchmarked BERTurk on all of our datasets:

| Subset | Task | Metric | Score |
|---|---|---|---|
| TrCOLA | acceptability | Matthews corr. | 42 |
| TrSST-2 | sentiment | Matthews corr. | 67.6 |
| TrMRPC | paraphrase | acc./F1 | 84.3 |
| TrSTS-B | sentence similarity | Pearson/Spearman corr. | 87.1 |
| TrQQP | paraphrase | acc./F1 | 86.2 |
| TrMNLI | NLI | matched/mismatched acc. | 75.4/72.5 |
| TrQNLI | QA/NLI | acc. | 84.3 |
| TrRTE | NLI | acc. | 71.2 |
| TrWNLI | coref./NLI | acc. | 51.6 |


We also benchmarked a handful of popular LLMs on the challenging subsets TrCOLA and TrWNLI:



## Citation

Coming soon!