BayanDuygu committed on
Commit 660c7f8 · verified · 1 Parent(s): 95e1e8e

first draft

Files changed (1): README.md +51 -0
# Dataset Card for TrGLUE

TrGLUE is a natural language understanding benchmark for Turkish, comprising several single-sentence and sentence-pair classification tasks. It is directly inspired by the original GLUE benchmark.

## Tasks

### Single Sentence Tasks

**TrCOLA**: grammatical acceptability classification, the Turkish counterpart of CoLA.

**TrSST-2**: binary sentiment classification of movie reviews, the Turkish counterpart of SST-2.

### Sentence Pair Tasks

**TrMRPC**: paraphrase detection, the Turkish counterpart of MRPC.

**TrSTS-B**: semantic textual similarity regression, the Turkish counterpart of STS-B.

**TrQQP**: duplicate question detection, the Turkish counterpart of QQP.

**TrMNLI**: natural language inference, the Turkish counterpart of MNLI.

**TrQNLI**: question-sentence entailment, the Turkish counterpart of QNLI.

**TrRTE**: textual entailment, the Turkish counterpart of RTE.

**TrWNLI**: Winograd-style coreference inference, the Turkish counterpart of WNLI.
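
To make the task list concrete, here is a minimal sketch of loading one TrGLUE task with the Hugging Face `datasets` library. The repository id and the per-task config name below are illustrative assumptions; check this repo's `configs:` metadata for the exact names.

```python
# Minimal loading sketch. "BayanDuygu/TrGLUE" and the "trcola" config name
# are assumptions for illustration; check the repo's configs for exact ids.
from datasets import load_dataset

trcola = load_dataset("BayanDuygu/TrGLUE", "trcola")  # hypothetical config name
print(trcola)              # splits and sizes
print(trcola["train"][0])  # one labeled example
```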

## Dataset Curation

Some of the datasets are translations of the original GLUE sets, while others were compiled by us. TrSST-2 is scraped from the Turkish movie review websites Sinefil and Beyazperde.
TrCOLA is compiled from openly available linguistics books, then curated with the LLM [Snowflake Arctic]() and the data company [Co-one]().
For more information, please refer to [TrCOLA's standalone repo]() and the [research paper]().

The rest of the datasets are direct translations; all translation was done by the open-source LLM Snowflake Arctic.
We translated the datasets, then made a second pass over the data to eliminate hallucinations.
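
The card does not specify how the second pass was run. As a rough illustration only, the sketch below assumes Snowflake Arctic served behind an OpenAI-compatible chat endpoint; the endpoint URL, model name, prompts, and helper functions are all hypothetical.

```python
# Hypothetical sketch of a translate-then-verify pass, NOT the authors' actual
# pipeline. Assumes an OpenAI-compatible chat endpoint serving Snowflake Arctic.
import requests

BASE_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical endpoint

def chat(prompt: str) -> str:
    resp = requests.post(BASE_URL, json={
        "model": "snowflake-arctic-instruct",  # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

def translate(sentence: str) -> str:
    return chat("Translate the following English sentence to Turkish. "
                f"Return only the translation.\n\n{sentence}")

def is_faithful(src: str, tr: str) -> bool:
    # Second pass: ask the model to flag hallucinated translations.
    verdict = chat("Does this Turkish sentence faithfully translate the "
                   f"English one? Answer YES or NO.\n\nEN: {src}\nTR: {tr}")
    return verdict.upper().startswith("YES")

english = ["The cat sat on the mat."]
pairs = [(s, translate(s)) for s in english]
clean = [(s, t) for s, t in pairs if is_faithful(s, t)]
```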

## Benchmarking

We provide a benchmarking script at the [TrGLUE GitHub repo](). The script is the same as Hugging Face's original GLUE benchmarking script, except for the success metric used for TrSST-2.
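
Pending the linked script, here is a rough sketch of what fine-tuning BERTurk on one task could look like with the Hugging Face `Trainer`. The dataset repo id, config name, column names, and split names are assumptions, not taken from this card.

```python
# Sketch of fine-tuning BERTurk on one TrGLUE task with the HF Trainer.
# "BayanDuygu/TrGLUE", "trsst2", "sentence"/"label" columns, and the
# "validation" split are assumptions for illustration.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "dbmdz/bert-base-turkish-cased"  # BERTurk
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

ds = load_dataset("BayanDuygu/TrGLUE", "trsst2")  # hypothetical config name
ds = ds.map(lambda batch: tokenizer(batch["sentence"], truncation=True),
            batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": (preds == labels).mean()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="berturk-trsst2", num_train_epochs=3),
    train_dataset=ds["train"],
    eval_dataset=ds["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```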

We benchmarked BERTurk on all of our datasets:

We also benchmarked a handful of popular LLMs on the challenging sets TrCOLA and TrWNLI:

## Citation

Coming soon!