Commit d5ed5d4 · verified
SudiptoPramanik committed · 1 Parent(s): cc4d2dd
SudiptoPramanik/Mail_RewardModel_RobertaBase
README.md CHANGED
@@ -19,10 +19,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0064
-- F1: 1.0
-- Roc Auc: 1.0
-- Accuracy: 1.0
+- Loss: 0.1713
+- F1: 0.9670
+- Roc Auc: 0.9670
+- Accuracy: 0.9670
 
 ## Model description
 
@@ -45,24 +45,24 @@ The following hyperparameters were used during training:
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
-- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 5
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:---:|:-------:|:--------:|
-| No log | 1.0 | 63 | 0.0064 | 1.0 | 1.0 | 1.0 |
-| 0.1495 | 2.0 | 126 | 0.0028 | 1.0 | 1.0 | 1.0 |
-| 0.1495 | 3.0 | 189 | 0.0020 | 1.0 | 1.0 | 1.0 |
-| 0.0035 | 4.0 | 252 | 0.0016 | 1.0 | 1.0 | 1.0 |
-| 0.0024 | 5.0 | 315 | 0.0015 | 1.0 | 1.0 | 1.0 |
+| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
+| No log | 1.0 | 63 | 0.1713 | 0.9670 | 0.9670 | 0.9670 |
+| 0.1703 | 2.0 | 126 | 0.1866 | 0.9670 | 0.9670 | 0.9670 |
+| 0.1703 | 3.0 | 189 | 0.1876 | 0.9670 | 0.9670 | 0.9670 |
+| 0.0284 | 4.0 | 252 | 0.1917 | 0.9670 | 0.9670 | 0.9670 |
+| 0.0283 | 5.0 | 315 | 0.1924 | 0.9670 | 0.9670 | 0.9670 |
 
 
 ### Framework versions
 
 - Transformers 4.47.1
-- Pytorch 2.5.1+cu121
+- Pytorch 2.5.1+cu124
 - Datasets 3.2.0
 - Tokenizers 0.21.0
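For readers who want to mirror the training configuration listed in the README diff above, the snippet below is a minimal sketch that maps those hyperparameters onto `transformers.TrainingArguments`. Only the values visible in the diff are set; anything else (learning rate, output directory, dataset handling) is not shown in this commit, so it is left as a placeholder or library default rather than taken from the actual run.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed in the README diff above.
# Values not visible in the diff (e.g. learning_rate, output_dir) are
# placeholders or library defaults, not taken from the actual training run.
training_args = TrainingArguments(
    output_dir="Mail_RewardModel_RobertaBase",  # placeholder output path
    per_device_train_batch_size=16,             # train_batch_size: 16
    per_device_eval_batch_size=16,              # eval_batch_size: 16
    seed=42,                                    # seed: 42
    optim="adamw_torch",                        # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,                             # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,                         # epsilon=1e-08
    lr_scheduler_type="linear",                 # lr_scheduler_type: linear
    num_train_epochs=5,                         # num_epochs: 5
    eval_strategy="epoch",                      # assumed: one validation pass per epoch, as in the results table
)
```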
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fc34758264abaf139eef1509088ae68e02a6955548e98e9828b8471f41101dab
+oid sha256:0373f678fe1ad89f2f968d08e7d4b4d3da620762e20eb01f38f453812ac5dc5e
 size 498612824
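The updated `model.safetensors` pointer above corresponds to the new fine-tuned weights pushed in this commit. As a hedged sketch of how one might load this checkpoint for scoring, the snippet below assumes the repository exposes a RoBERTa sequence-classification head; the exact head type, label count, and output activation are not visible in this commit and should be confirmed against the repo's `config.json`.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the reward model is served as a RoBERTa-base sequence-classification
# head. Check the repository's config.json for the actual head type and labels.
repo_id = "SudiptoPramanik/Mail_RewardModel_RobertaBase"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("Example e-mail text to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # raw scores; apply sigmoid/softmax as appropriate
print(logits)
```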
runs/Feb05_00-01-15_8db20cfb4ab1/events.out.tfevents.1738713705.8db20cfb4ab1.1023.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1fcb9476a828368e106d8b43858876ceb55cdc52008656e5b888dd9175f55e8b
+size 8305
runs/Feb05_00-01-15_8db20cfb4ab1/events.out.tfevents.1738714605.8db20cfb4ab1.1023.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:715506d139b5eb4aa7a3a1ff99f8e1b15cf15637343d3514125e0e2935a1a494
+size 508
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8eb3b3d4ebc6dfb7b336cd28c596b88dfe2556a1f9cb82177494becdf3c31344
+oid sha256:e7a6d49997dbbe7638d553bf3988bb5d50f123d7e5396a797b1d05d1a009d041
 size 5368