RVN committed on
Commit
5fac5f4
1 Parent(s): f7b2751

Update README.md

Files changed (1)
  1. README.md +13 -13
README.md CHANGED
@@ -31,19 +31,19 @@ For training, we used all Maltese data that was present in the [MaCoCu](https://
 
 # Benchmark performance
 
- We tested the performance of MaltBERTa on the UPOS and XPOS benchmark of the [Universal Dependencies](https://universaldependencies.org/) project. We compare performance to the strong multi-lingual models XLMR-base and XLMR-large, though note that Maltese was not one of the training languages for those models. We also compare to the recently introduced Maltese language models [BERTu](https://huggingface.co/MLRS/BERTu), [mBERTu](https://huggingface.co/MLRS/mBERTu) and our own [MaltBERTa](https://huggingface.co/RVN/MaltBERTa). For details regarding the fine-tuning procedure you can checkout our [Github](https://github.com/macocu/LanguageModels).
-
- Scores are averages of three runs. We use the same hyperparameter settings for all models.
-
- | | **UPOS** | **UPOS** | **XPOS** | **XPOS** |
- |-----------------|:--------:|:--------:|:--------:|:--------:|
- | | **Dev** | **Test** | **Dev** | **Test** |
- | **XLM-R-base** | 93.6 | 93.2 | 93.4 | 93.2 |
- | **XLM-R-large** | 94.9 | 94.4 | 95.1 | 94.7 |
- | **BERTu** | 97.5 | 97.6 | 95.7 | 95.8 |
- | **mBERTu** | **97.7** | 97.8 | 97.9 | 98.1 |
- | **MaltBERTa** | 95.7 | 95.8 | 96.1 | 96.0 |
- | **XLMR-MaltBERTa** | **97.7** | **98.1** | **98.1** | **98.2** |
 
 # Acknowledgements
 
 
 # Benchmark performance
 
+ We tested the performance of MaltBERTa on the UPOS and XPOS benchmarks of the [Universal Dependencies](https://universaldependencies.org/) project. Moreover, we test on a Google-translated version of the COPA data set (see our [Github repo](https://github.com/RikVN/COPA) for details). We compare performance to the strong multilingual models XLMR-base and XLMR-large, though note that Maltese was not one of the training languages for those models. We also compare to the recently introduced Maltese language models [BERTu](https://huggingface.co/MLRS/BERTu), [mBERTu](https://huggingface.co/MLRS/mBERTu) and our own [MaltBERTa](https://huggingface.co/RVN/MaltBERTa). For details regarding the fine-tuning procedure you can check out our [Github](https://github.com/macocu/LanguageModels).
+
+ Scores are averages of three runs for UPOS/XPOS and 10 runs for COPA. We use the same hyperparameter settings for all models for UPOS/XPOS, while for COPA we optimize on the dev set.
+
+ | | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **COPA** |
+ |-----------------|:--------:|:--------:|:--------:|:--------:|:--------:|
+ | | **Dev** | **Test** | **Dev** | **Test** | **Test** |
+ | **XLM-R-base** | 93.6 | 93.2 | 93.4 | 93.2 | 52.2 |
+ | **XLM-R-large** | 94.9 | 94.4 | 95.1 | 94.7 | 54.0 |
+ | **BERTu** | 97.5 | 97.6 | 95.7 | 95.8 | **55.6** |
+ | **mBERTu** | **97.7** | 97.8 | 97.9 | 98.1 | 52.6 |
+ | **MaltBERTa** | 95.7 | 95.8 | 96.1 | 96.0 | 53.7 |
+ | **XLMR-MaltBERTa** | **97.7** | **98.1** | **98.1** | **98.2** | 54.4 |
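
The aggregation behind these numbers (averages over three runs for UPOS/XPOS and ten for COPA) can be sketched as follows; the helper name and the per-run values are illustrative only, not taken from our fine-tuning code or run logs:

```python
def average_runs(scores, ndigits=1):
    """Average per-run accuracies and round to the table's one-decimal precision."""
    return round(sum(scores) / len(scores), ndigits)

# Illustrative per-run UPOS dev accuracies for one model (not real results).
runs = [93.5, 93.6, 93.7]
print(average_runs(runs))  # -> 93.6
```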
 
 # Acknowledgements