BayanDuygu committed
Commit 07900ee · verified · 1 Parent(s): 660c7f8

first version

Files changed (1): README.md +70 -14
README.md CHANGED
 
path: cola/test-*
---

# TrGLUE - A Natural Language Understanding Benchmark for Turkish

# Dataset Card for TrGLUE

### Single Sentence Tasks

**TrCOLA** The original **C**orpus **o**f **L**inguistic **A**cceptability consists of sentences compiled from English linguistics publications. The task is to determine whether a given sentence is grammatically correct and acceptable.
Our corpus is likewise compiled from Turkish linguistics textbooks and includes morphological, syntactic, and semantic violations.

**TrSST-2** The Stanford Sentiment Treebank is a sentiment analysis dataset that includes sentences from movie reviews, annotated by human annotators.
The task is to predict the sentiment of a given sentence. Our dataset is compiled from Turkish movie review websites; both the reviews and the sentiment ratings come from those websites.

### Sentence Pair Tasks

**TrMRPC** The Microsoft Research Paraphrase Corpus is a dataset of sentence pairs automatically extracted from online news sources, with human annotations.
The task is to determine whether the sentences in a pair are semantically equivalent. Our dataset is a direct translation of this dataset.

**TrSTS-B** The Semantic Textual Similarity Benchmark is a semantic similarity dataset. It contains sentence pairs compiled from news headlines, video captions, and image captions.
Each pair is annotated with a similarity score from 1 to 5. Our dataset is a direct translation of this dataset.

**TrQQP** The Quora Question Pairs (QQP) dataset is a collection of question pairs from the question-answering website Quora.
The task is to determine whether a pair of questions is semantically equivalent. Our dataset is a direct translation of this dataset.

**TrMNLI** The Multi-Genre Natural Language Inference Corpus is a crowdsourced dataset for the textual entailment task.
Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts it (contradiction), or neither (neutral).
The premise sentences are compiled from different sources, including transcribed speech, fiction, and more. Our dataset is a direct translation of this dataset.

**TrQNLI** The Stanford Question Answering Dataset (SQuAD) is a well-known question-answering dataset consisting of context-question pairs,
where the context text (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator).
QNLI is a binary classification version of SQuAD, where the task is to decide whether the context text contains the answer to the question text.
Our dataset is a direct translation of this dataset.

**TrRTE** The Recognizing Textual Entailment dataset is compiled from a series of annual textual entailment challenges, namely RTE1, RTE3, and RTE5.
The task is again textual entailment. Our dataset is a direct translation of this dataset.

**TrWNLI** The Winograd Schema Challenge, introduced by Levesque et al. in 2011, is a type of reading comprehension task.
In this challenge, a system reads a sentence containing a pronoun and must determine the correct referent of that pronoun from a set of choices.
The examples are deliberately designed to defeat simple statistical methods by relying on contextual cues provided by specific words or phrases in the sentence.
To turn this challenge into a sentence pair classification task, the creators of the benchmark generate pairs of sentences by replacing the ambiguous pronoun with each potential referent. The objective is to predict whether the sentence remains logically consistent when the pronoun is substituted with one of the choices.
Our dataset is a direct translation of this dataset.

## Dataset Statistics

The size of each subset is given below:

| Subset | Size |
|---|---|
| TrCOLA | 9.92K |
| TrSST-2 | 78K |
| TrMRPC | 5.23K |
| TrSTS-B | 7.96K |
| TrQQP | 249K |
| TrMNLI | 161K |
| TrQNLI | 44.3K |
| TrRTE | 4.65K |
| TrWNLI | 683 |

For more information about dataset statistics, please visit the [research paper]().
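
For quick experimentation, the subsets can be loaded through the `datasets` library. The sketch below is illustrative: the `cola` configuration name comes from the YAML fragment at the top of this card, while the repository id `turkish-nlp-suite/TrGLUE` and the lowercase naming of the other subsets are assumptions on our part (the organization name matches the GitHub repo linked in the Benchmarking section below).

```python
from datasets import load_dataset

# Assumed repository id; "cola" appears as a config path in the YAML header above.
trcola = load_dataset("turkish-nlp-suite/TrGLUE", "cola")

print(trcola)                    # available splits and their sizes
print(trcola["train"].features)  # column names and types
print(trcola["train"][0])        # one acceptability example
```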

## Dataset Curation

Some of the datasets are translations of the original GLUE sets, while others were compiled by us. TrSST-2 was scraped from the Turkish movie review websites Sinefil and Beyazperde.
TrCOLA was compiled from openly available linguistics books; grammatical violations were then generated by the
LLM [Snowflake Arctic](https://www.snowflake.com/en/blog/arctic-open-efficient-foundation-language-models-snowflake/) and
the data was curated by the data company [Co-one](https://www.co-one.co/).
For more information please refer to [TrCOLA's standalone repo](https://huggingface.co/datasets/turkish-nlp-suite/TrCOLA) and the [research paper]().

The rest of the datasets are direct translations; all translations were done by the open-source LLM Snowflake Arctic.
We translated the datasets, then made a second pass over the data to eliminate hallucinations.
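
The card does not spell out how the second pass works; purely as an illustration, the sketch below shows one simple automatic screen a reviewer might run before manual inspection. The length-ratio heuristic and the toy sentence pair are assumptions for demonstration, not the authors' actual procedure.

```python
# Illustrative only: a crude automatic screen for suspicious machine translations,
# NOT the actual filtering procedure used by the TrGLUE authors.
def looks_suspicious(source: str, translation: str, max_ratio: float = 2.5) -> bool:
    """Flag empty outputs or translations whose length diverges wildly from the source."""
    if not translation.strip():
        return True
    ratio = len(translation) / max(len(source), 1)
    return ratio > max_ratio or ratio < 1 / max_ratio


pairs = [
    ("The movie was surprisingly good.", "Film şaşırtıcı derecede iyiydi."),
    ("The movie was surprisingly good.", ""),  # empty output, gets flagged
]
flagged = [pair for pair in pairs if looks_suspicious(*pair)]
print(f"{len(flagged)} of {len(pairs)} pairs flagged for manual review")
```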

## Benchmarking

We provide a benchmarking script in the [TrGLUE GitHub repo](https://github.com/turkish-nlp-suite/TrGLUE).
The script is the same as HF's original GLUE benchmarking script, except for the success metric of TrSST-2 (the original task's metric is binary accuracy; ours is the Matthews correlation coefficient).
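
Since the only metric swap is TrSST-2's move from accuracy to the Matthews correlation coefficient, the snippet below shows how that score can be computed from predictions. It uses `scikit-learn`, which is our assumption for illustration rather than a dependency stated by the repo.

```python
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Toy labels/predictions for a binary sentiment task like TrSST-2 (0 = negative, 1 = positive).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Accuracy is the original SST-2 metric; TrGLUE reports Matthews correlation instead,
# which stays informative even when the label distribution is skewed.
print("accuracy:", accuracy_score(y_true, y_pred))
print("matthews corr.:", matthews_corrcoef(y_true, y_pred))
```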

We benchmarked BERTurk on all of our datasets:

| Subset | Task | Metric | Score |
|---|---|---|---|
| TrCOLA | acceptability | Matthews corr. | 42 |
| TrSST-2 | sentiment | Matthews corr. | 67.6 |
| TrMRPC | paraphrase | acc./F1 | 84.3 |
| TrSTS-B | sentence similarity | Pearson/Spearman corr. | 87.1 |
| TrQQP | paraphrase | acc./F1 | 86.2 |
| TrMNLI | NLI | matched/mismatched acc. | 75.4/72.5 |
| TrQNLI | QA/NLI | acc. | 84.3 |
| TrRTE | NLI | acc. | 71.2 |
| TrWNLI | coref./NLI | acc. | 51.6 |

We also benchmarked a handful of popular LLMs on the challenging sets TrCOLA and TrWNLI: