sushant-joshi committed on
Commit fed08cc
1 Parent(s): 488b3df

revise the readme


revise the readme file

Files changed (1)
  1. README.md +1 -9
README.md CHANGED
@@ -12,7 +12,7 @@ probably proofread and complete it, then remove this comment. -->
 
 # malay-patel/bert-finetuned-squad-nq
 
-This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on the Natural Questions dataset.
+This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on the SubjQA dataset.
 
 It achieves the following results on the evaluation set:
 - Train Loss: 1.5461
@@ -20,22 +20,15 @@ It achieves the following results on the evaluation set:
 - Train Start Logits Accuracy: 0.6120
 - Epoch: 2
 
-## Model description
-
-More information needed
-
 ## Intended uses & limitations
 
 The intent of publishing this model is to improve the performance of question answering model, with the subjectivity of questions/ answers being the prime objective.
-The model has been pretrained on SQuAD 2.0 and finetuned with Natural Questions dataset. It has been validated against SubjQA dataset for reporting the metrics.
 
 ## Training and evaluation data
 
 Training Data: SQuAD 2.0 + Natural Questions + SubjQA
 Evaluation Data: SubjQA
 
-## Training procedure
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -50,7 +43,6 @@ The following hyperparameters were used during training:
 | 1.5423 | 0.6286 | 0.6192 | 1 |
 | 1.5461 | 0.6253 | 0.6120 | 2 |
 
-
 ### Framework versions
 
 - Transformers 4.24.0
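As a quick illustration of the intended use described in the README (extractive question answering, with subjective questions in mind), here is a minimal usage sketch via the transformers question-answering pipeline. It assumes the model is published under the id shown in the README heading, malay-patel/bert-finetuned-squad-nq, and the question/context strings are invented for the example.

```python
# Minimal sketch: extractive QA with the fine-tuned model.
# Assumes the model id matches the README heading; adjust if the repo lives elsewhere.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="malay-patel/bert-finetuned-squad-nq",  # assumed model id
)

# A subjective question in the spirit of SubjQA (strings are illustrative only).
result = qa(
    question="How is the battery life?",
    context="I love the screen, but the battery life is disappointing and barely lasts half a day.",
)
print(result["answer"], result["score"])
```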