fyaronskiy committed on
Commit 196d04d · verified · 1 Parent(s): 2a4bf07

End of training

Files changed (2)
  1. README.md +75 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: deepvk/deberta-v1-base
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: deepvk_deberta-v1-base__bs32_max_len128_ep10_lr5e-05_lr_sheduler_linear
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # deepvk_deberta-v1-base__bs32_max_len128_ep10_lr5e-05_lr_sheduler_linear
+
+ This model is a fine-tuned version of [deepvk/deberta-v1-base](https://huggingface.co/deepvk/deberta-v1-base) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1221
+ - Model Preparation Time: 0.003
+ - Accuracy: 0.9631
+ - F1: 0.5409
+ - Precision: 0.5667
+ - Recall: 0.5172
+
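As a quick sanity check on the reported numbers, F1 should be the harmonic mean of precision and recall, F1 = 2PR / (P + R). A minimal stdlib-only verification using the values above (small discrepancies come from the inputs already being rounded to four decimals):

```python
# Reported evaluation metrics from the model card above.
precision = 0.5667
recall = 0.5172

# F1 is the harmonic mean of precision and recall: 2PR / (P + R).
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 4))  # ~0.5408, matching the reported 0.5409 to within input rounding
```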
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 28
+ - optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.05
+ - num_epochs: 10
+
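The linear schedule with 5% warmup listed above can be sketched as a simple function of the optimizer step. This is an illustrative stdlib-only approximation of the shape produced by transformers' `get_linear_schedule_with_warmup`, not the Trainer's exact internals; the total step count assumes 1,357 optimizer steps per epoch, as in the results table, times the 10 configured epochs.

```python
def linear_lr(step, total_steps, base_lr=5e-5, warmup_ratio=0.05):
    """LR at a given optimizer step: linear warmup, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Ramp linearly from 0 up to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # Decay linearly from base_lr down to 0 over the remaining steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 1357 * 10  # assumed: 1,357 steps/epoch (from the results table) x 10 epochs
print(linear_lr(0, total))      # 0.0 at the first step
print(linear_lr(678, total))    # peak of 5e-05 at the end of warmup
print(linear_lr(total, total))  # 0.0 at the final step
```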
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy | F1     | Precision | Recall |
+ |:-------------:|:-----:|:----:|:---------------:|:----------------------:|:--------:|:------:|:---------:|:------:|
+ | 0.0992        | 1.0   | 1357 | 0.0963          | 0.003                  | 0.9669   | 0.4686 | 0.7171    | 0.3480 |
+ | 0.0886        | 2.0   | 2714 | 0.0884          | 0.003                  | 0.9679   | 0.5236 | 0.6958    | 0.4197 |
+ | 0.0817        | 3.0   | 4071 | 0.0888          | 0.003                  | 0.9678   | 0.5447 | 0.6710    | 0.4585 |
+ | 0.0725        | 4.0   | 5428 | 0.0927          | 0.003                  | 0.9657   | 0.5345 | 0.6219    | 0.4687 |
+ | 0.0619        | 5.0   | 6785 | 0.0978          | 0.003                  | 0.9650   | 0.5390 | 0.6026    | 0.4876 |
+ | 0.0502        | 6.0   | 8142 | 0.1075          | 0.003                  | 0.9643   | 0.5391 | 0.5886    | 0.4973 |
+ | 0.0407        | 7.0   | 9499 | 0.1221          | 0.003                  | 0.9631   | 0.5409 | 0.5667    | 0.5172 |
+
+
+ ### Framework versions
+
+ - Transformers 4.47.1
+ - Pytorch 2.5.1+cu121
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c126ec5e5d43de9c572f78951bb7db51c6ac69ab3da956e0b67650de7b2cf8b0
+ oid sha256:bd316ea027674ccb8161a4f1344532071ef435006a95305fa855855b965b4964
  size 498642128