Update README.md
README.md
CHANGED
@@ -30,7 +30,7 @@ For training, we used all Turkish data that was present in the monolingual Turkish

 # Benchmark performance

-We tested the performance of **XLMR-MaCoCu-tr** on benchmarks of XPOS, UPOS and NER from the [Universal Dependencies](https://universaldependencies.org/) project. We also tested on a
+We tested the performance of **XLMR-MaCoCu-tr** on benchmarks of XPOS, UPOS and NER from the [Universal Dependencies](https://universaldependencies.org/) project. We also tested on a Google-translated version of the COPA data set (for details, see our [Github repo](https://github.com/RikVN/COPA)). We compare performance to the strong multilingual models XLMR-base and XLMR-large, as well as to the monolingual [BERTurk](https://huggingface.co/dbmdz/bert-base-turkish-cased) model. For details on the fine-tuning procedure, check out our [Github](https://github.com/macocu/LanguageModels).

 Scores are averages of three runs, except for COPA, for which we use 10 runs. We use the same hyperparameter settings for all models for POS/NER; for COPA, we optimized each learning rate on the dev set.
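For reference, a minimal sketch of how the model could be loaded for the token-classification (POS/NER) fine-tuning described above, using the Hugging Face transformers library. The Hub ID `MaCoCu/XLMR-MaCoCu-tr` and the label count are illustrative assumptions, not the authors' exact configuration (see their [Github](https://github.com/macocu/LanguageModels) for that):

```python
# Minimal sketch (not the authors' exact pipeline): token-classification
# setup for POS/NER fine-tuning with Hugging Face transformers.
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "MaCoCu/XLMR-MaCoCu-tr"  # assumed Hub ID for XLMR-MaCoCu-tr
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(
    model_id,
    num_labels=17,  # illustrative: the 17 UPOS tags in Universal Dependencies
)

# Pre-tokenized input, e.g. one sentence from a UD Turkish treebank
words = ["Bu", "bir", "deneme", "cümlesidir", "."]
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")
logits = model(**inputs).logits  # shape: (batch, subword_len, num_labels)
```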