DanJoshua committed on
Commit b284e97 · verified · 1 Parent(s): 4dcba69

Model save

Files changed (4)
  1. README.md +78 -0
  2. all_results.json +8 -0
  3. model.safetensors +1 -1
  4. train_results.json +8 -0
README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: estudiante_Swin3D_RLVS
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # estudiante_Swin3D_RLVS
+
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1367
+ - Accuracy: 0.9786
+ - F1: 0.9786
+ - Precision: 0.9787
+ - Recall: 0.9786
+ - Roc Auc: 0.9981
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 15
+ - eval_batch_size: 15
+ - seed: 42
+ - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 318
+ - training_steps: 3180
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc |
+ |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:|
+ | 0.2317 | 1.0160 | 159 | 0.1561 | 0.9412 | 0.9410 | 0.9453 | 0.9412 | 0.9878 |
+ | 0.1354 | 2.0321 | 318 | 0.0908 | 0.9652 | 0.9652 | 0.9654 | 0.9652 | 0.9939 |
+ | 0.0698 | 4.0142 | 477 | 0.1050 | 0.9679 | 0.9679 | 0.9688 | 0.9679 | 0.9952 |
+ | 0.0767 | 5.0302 | 636 | 0.0930 | 0.9759 | 0.9759 | 0.9761 | 0.9759 | 0.9972 |
+ | 0.0576 | 7.0123 | 795 | 0.0916 | 0.9786 | 0.9786 | 0.9787 | 0.9786 | 0.9975 |
+ | 0.0514 | 8.0283 | 954 | 0.0840 | 0.9813 | 0.9813 | 0.9814 | 0.9813 | 0.9985 |
+ | 0.0481 | 10.0104 | 1113 | 0.1026 | 0.9733 | 0.9733 | 0.9733 | 0.9733 | 0.9980 |
+ | 0.0257 | 11.0264 | 1272 | 0.1148 | 0.9813 | 0.9813 | 0.9814 | 0.9813 | 0.9966 |
+ | 0.03 | 13.0085 | 1431 | 0.1170 | 0.9759 | 0.9759 | 0.9761 | 0.9759 | 0.9981 |
+ | 0.0302 | 14.0245 | 1590 | 0.1537 | 0.9733 | 0.9733 | 0.9735 | 0.9733 | 0.9978 |
+ | 0.0414 | 16.0066 | 1749 | 0.1367 | 0.9786 | 0.9786 | 0.9787 | 0.9786 | 0.9981 |
+
+
+ ### Framework versions
+
+ - Transformers 4.46.3
+ - Pytorch 2.0.1+cu118
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
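
The hyperparameters listed in the model card map directly onto `transformers` `TrainingArguments`. Below is a minimal sketch of that mapping, not the actual training script: the output directory is hypothetical, the per-device batch size assumes a single GPU, and `fp16=True` stands in for the card's "Native AMP" mixed precision.

```python
# Sketch: TrainingArguments mirroring the hyperparameters listed in the README.
# Only the values shown in the card are taken from the commit; everything else
# (output path, single-device assumption) is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="estudiante_Swin3D_RLVS",  # hypothetical local path
    learning_rate=1e-5,
    per_device_train_batch_size=15,       # assumes a single device
    per_device_eval_batch_size=15,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=318,
    max_steps=3180,
    fp16=True,                            # "Native AMP" mixed precision
)
```

In practice these arguments would be passed to a `Trainer` together with the student model, the training data, and a `compute_metrics` function, none of which are included in this commit.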
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 16.006603773584906,
+     "total_flos": 0.0,
+     "train_loss": 0.0886634129534591,
+     "train_runtime": 6948.2294,
+     "train_samples_per_second": 6.865,
+     "train_steps_per_second": 0.458
+ }
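
The throughput figures in `all_results.json` are consistent with the batch size, step count, and runtime reported above. A quick arithmetic check, assuming a single device and no gradient accumulation:

```python
# Sanity check: reported throughput follows from steps, batch size, and runtime,
# assuming one device and no gradient accumulation.
steps = 3180
batch_size = 15
runtime_s = 6948.2294

samples_per_second = steps * batch_size / runtime_s
steps_per_second = steps / runtime_s
print(round(samples_per_second, 3))  # ~6.865, matching "train_samples_per_second"
print(round(steps_per_second, 3))    # ~0.458, matching "train_steps_per_second"
```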
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ffedcd1f41c0470779649df96f427c6bb9b22414d951cd73be50292828b2425c
+ oid sha256:925e7559ae258fe1632f225142634612bbf5fbe2a1335f336232a649b10cd036
  size 126481280
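
The Git LFS pointer above records only the hash and byte size of the updated weights. One way to inspect the checkpoint after downloading it is the `safetensors` library, as sketched below; the local path is hypothetical, and the parameter estimate assumes float32 tensors (4 bytes each, consistent with the 126,481,280-byte file).

```python
# Sketch: inspecting the downloaded checkpoint with the safetensors library.
# The path is hypothetical; the parameter count assumes float32 weights.
from safetensors.torch import load_file

state_dict = load_file("estudiante_Swin3D_RLVS/model.safetensors")
n_params = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, {n_params:,} parameters")  # ~31.6M at 4 bytes each
```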
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 16.006603773584906,
+     "total_flos": 0.0,
+     "train_loss": 0.0886634129534591,
+     "train_runtime": 6948.2294,
+     "train_samples_per_second": 6.865,
+     "train_steps_per_second": 0.458
+ }
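
For reference, the evaluation metrics reported in the model card (accuracy, F1, precision, recall, ROC AUC) can be computed with scikit-learn as sketched below. The labels and probabilities are placeholders, and weighted averaging is an assumption, since the card does not state how F1, precision, and recall were aggregated.

```python
# Sketch: computing the card's evaluation metrics with scikit-learn.
# Placeholder labels/probabilities; weighted averaging is an assumption.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = np.array([0, 1, 1, 0, 1])            # placeholder ground-truth labels
y_prob = np.array([0.1, 0.9, 0.8, 0.3, 0.6])  # placeholder positive-class probabilities
y_pred = (y_prob >= 0.5).astype(int)

print("accuracy ", accuracy_score(y_true, y_pred))
print("f1       ", f1_score(y_true, y_pred, average="weighted"))
print("precision", precision_score(y_true, y_pred, average="weighted"))
print("recall   ", recall_score(y_true, y_pred, average="weighted"))
print("roc_auc  ", roc_auc_score(y_true, y_prob))
```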