pedramyazdipoor committed
Commit: d181bd7
Parent: fe1a154

Update README.md

Files changed (1): README.md (+6 -5)
README.md CHANGED
@@ -15,7 +15,6 @@ This model is fine-tuned on PQuAD Train set and is easily ready to use. Too long
 
 ## Hyperparameters
 The reason I set batch_size to 4 is limited GPU memory in Google Colab.
-Pre-processing, training and evaluating model took me about 4 hours.
 ```
 batch_size = 4
 n_epochs = 1
@@ -35,14 +34,16 @@ I trained for more than 1 epoch as well, but I get worse results.
 
 ### Question Answering On Test Set of PQuAD Dataset
 
-| Metric      | Our XLM-Roberta | Our Pars Bert |
-|:-----------:|:---------------:|:-------------:|
-| Exact Match | 66.56*          | 47.44         |
-| F1          | 87.31*          | 81.96         |
+| Metric      | Our XLM-Roberta Large | Our Pars Bert |
+|:-----------:|:---------------------:|:-------------:|
+| Exact Match | 66.56*                | 47.44         |
+| F1          | 87.31*                | 81.96         |
 
 
 
 ## How to use
+
+## Pytorch
 ```python
 from transformers import AutoTokenizer, AutoModelForQuestionAnswering
 path = 'pedramyazdipoor/persian_xlm_roberta_large'
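
The hunk ends just after the model path, so the rest of the `How to use` snippet is not visible in this diff. For context, here is a minimal sketch of how an extractive-QA checkpoint like this is typically run with `transformers`; the Persian question/context pair and the greedy argmax span decoding are illustrative assumptions, not taken from the commit:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

path = 'pedramyazdipoor/persian_xlm_roberta_large'
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForQuestionAnswering.from_pretrained(path)
model.eval()

# Illustrative example, not from the commit:
# question: "Where is the capital of Iran?"
# context:  "Tehran is the capital of Iran."
question = 'پایتخت ایران کجاست؟'
context = 'تهران پایتخت ایران است.'

# Encode question and context as a single pair.
inputs = tokenizer(question, context, return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# Greedy decoding: pick the highest-scoring start and end positions,
# then decode the tokens between them as the answer span.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
answer = tokenizer.decode(inputs['input_ids'][0][start:end + 1])
print(answer)
```

For long contexts or batched inference, the `pipeline('question-answering', ...)` wrapper in `transformers` handles document striding and answer ranking automatically, at the cost of less control over decoding.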