pedramyazdipoor committed
Commit 2f59c7f
1 Parent(s): 884c340

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -32,7 +32,7 @@ weight_decay = 0.01,
  ## Performance
  Evaluated on the Persian test set of PQuAD (see the [official PQuAD dataset](https://huggingface.co/datasets/newsha/PQuAD)).
  I also trained for more than one epoch, but I got worse results.
- Our XLM-Roberta outperforms [our ParsBert on PQuAD](https://huggingface.co/pedramyazdipoor/parsbert_question_answering_PQuAD), but the former is more than 3 times bigger than the latter one; so comparing these two is not fair.
+ Our XLM-Roberta outperforms [our ParsBert on PQuAD](https://huggingface.co/pedramyazdipoor/parsbert_question_answering_PQuAD), but the former is more than three times larger than the latter, so the comparison is not entirely fair.
  ### Question Answering On Test Set of PQuAD Dataset
 
  | Metric | Our XLM-Roberta Large | Our ParsBert |
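
For reference, a minimal sketch of how the PQuAD evaluation mentioned in this README could be reproduced with the `datasets` and `transformers` libraries. The XLM-RoBERTa checkpoint ID below is a placeholder, not the actual repo name; the split name and column names are assumed to follow the SQuAD convention, so adjust them to the dataset's real schema.

```python
from datasets import load_dataset
from transformers import pipeline

# Checkpoint IDs: the ParsBert ID is taken from the link in the README;
# the XLM-RoBERTa ID is a placeholder -- substitute the actual repo name.
XLM_R_QA = "your-namespace/xlm-roberta-large-qa-pquad"  # assumed placeholder
PARSBERT_QA = "pedramyazdipoor/parsbert_question_answering_PQuAD"

# Load the PQuAD test split (assumes a SQuAD-style schema with
# "question" and "context" columns and a "test" split).
pquad_test = load_dataset("newsha/PQuAD", split="test")

# Build an extractive question-answering pipeline for one of the models.
qa = pipeline("question-answering", model=XLM_R_QA)

# Run a single example and inspect the predicted span and its score.
example = pquad_test[0]
prediction = qa(question=example["question"], context=example["context"])
print(prediction["answer"], prediction["score"])
```

Swapping `XLM_R_QA` for `PARSBERT_QA` runs the same loop against the smaller model, which is how the two rows of the comparison table above would be produced.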