matthieulel committed
Commit e51725e · verified · 1 Parent(s): 8be9bff

Model save

Files changed (2):
  1. README.md +39 -41
  2. model.safetensors +1 -1
README.md CHANGED
@@ -2,8 +2,6 @@
 license: apache-2.0
 base_model: google/vit-large-patch32-224-in21k
 tags:
-- image-classification
-- vision
 - generated_from_trainer
 metrics:
 - accuracy
@@ -20,13 +18,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-large-patch32-224-in21k-finetuned-galaxy10-decals
 
-This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the matthieulel/galaxy10_decals dataset.
+This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5849
-- Accuracy: 0.8236
-- Precision: 0.8257
-- Recall: 0.8236
-- F1: 0.8212
+- Loss: 0.5907
+- Accuracy: 0.8360
+- Precision: 0.8348
+- Recall: 0.8360
+- F1: 0.8345
 
 ## Model description
 
@@ -46,11 +44,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 32
-- eval_batch_size: 32
+- train_batch_size: 128
+- eval_batch_size: 128
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 128
+- total_train_batch_size: 512
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
@@ -60,36 +58,36 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| 1.1583 | 0.99 | 124 | 1.0551 | 0.7069 | 0.6559 | 0.7069 | 0.6758 |
-| 0.8599 | 2.0 | 249 | 0.7914 | 0.7621 | 0.7717 | 0.7621 | 0.7557 |
-| 0.854 | 3.0 | 374 | 0.7115 | 0.7672 | 0.7850 | 0.7672 | 0.7642 |
-| 0.7282 | 4.0 | 499 | 0.6807 | 0.7683 | 0.7746 | 0.7683 | 0.7604 |
-| 0.6165 | 4.99 | 623 | 0.6208 | 0.8016 | 0.8088 | 0.8016 | 0.8015 |
-| 0.5946 | 6.0 | 748 | 0.5850 | 0.8044 | 0.8084 | 0.8044 | 0.8009 |
-| 0.6243 | 7.0 | 873 | 0.6090 | 0.7931 | 0.8037 | 0.7931 | 0.7935 |
-| 0.5429 | 8.0 | 998 | 0.5830 | 0.8021 | 0.8087 | 0.8021 | 0.8006 |
-| 0.558 | 8.99 | 1122 | 0.5725 | 0.8095 | 0.8191 | 0.8095 | 0.8081 |
-| 0.457 | 10.0 | 1247 | 0.5702 | 0.8123 | 0.8144 | 0.8123 | 0.8085 |
-| 0.4399 | 11.0 | 1372 | 0.5973 | 0.8021 | 0.8013 | 0.8021 | 0.7995 |
-| 0.4055 | 12.0 | 1497 | 0.5799 | 0.8157 | 0.8186 | 0.8157 | 0.8122 |
-| 0.417 | 12.99 | 1621 | 0.6006 | 0.8061 | 0.8175 | 0.8061 | 0.8066 |
-| 0.3843 | 14.0 | 1746 | 0.5849 | 0.8236 | 0.8257 | 0.8236 | 0.8212 |
-| 0.371 | 15.0 | 1871 | 0.5711 | 0.8196 | 0.8157 | 0.8196 | 0.8161 |
-| 0.3546 | 16.0 | 1996 | 0.6050 | 0.8140 | 0.8171 | 0.8140 | 0.8147 |
-| 0.2935 | 16.99 | 2120 | 0.6425 | 0.8106 | 0.8159 | 0.8106 | 0.8091 |
-| 0.2505 | 18.0 | 2245 | 0.6569 | 0.8112 | 0.8091 | 0.8112 | 0.8086 |
-| 0.3094 | 19.0 | 2370 | 0.6558 | 0.8162 | 0.8137 | 0.8162 | 0.8137 |
-| 0.2739 | 20.0 | 2495 | 0.7201 | 0.8067 | 0.8094 | 0.8067 | 0.8025 |
-| 0.2224 | 20.99 | 2619 | 0.7227 | 0.8140 | 0.8175 | 0.8140 | 0.8114 |
-| 0.2359 | 22.0 | 2744 | 0.6941 | 0.8157 | 0.8142 | 0.8157 | 0.8136 |
-| 0.2535 | 23.0 | 2869 | 0.7086 | 0.8157 | 0.8160 | 0.8157 | 0.8123 |
-| 0.2047 | 24.0 | 2994 | 0.7185 | 0.8236 | 0.8236 | 0.8236 | 0.8207 |
-| 0.2162 | 24.99 | 3118 | 0.7135 | 0.8219 | 0.8200 | 0.8219 | 0.8194 |
-| 0.2297 | 26.0 | 3243 | 0.7269 | 0.8213 | 0.8172 | 0.8213 | 0.8179 |
-| 0.2048 | 27.0 | 3368 | 0.7392 | 0.8145 | 0.8156 | 0.8145 | 0.8143 |
-| 0.2156 | 28.0 | 3493 | 0.7453 | 0.8207 | 0.8182 | 0.8207 | 0.8174 |
-| 0.1785 | 28.99 | 3617 | 0.7497 | 0.8168 | 0.8157 | 0.8168 | 0.8145 |
-| 0.1785 | 29.82 | 3720 | 0.7429 | 0.8202 | 0.8190 | 0.8202 | 0.8173 |
+| 1.8923 | 0.99 | 31 | 1.6725 | 0.4600 | 0.5537 | 0.4600 | 0.3682 |
+| 1.1787 | 1.98 | 62 | 0.9949 | 0.7339 | 0.7513 | 0.7339 | 0.7095 |
+| 0.9165 | 2.98 | 93 | 0.7946 | 0.7700 | 0.7694 | 0.7700 | 0.7540 |
+| 0.802 | 4.0 | 125 | 0.6747 | 0.7948 | 0.7954 | 0.7948 | 0.7843 |
+| 0.7074 | 4.99 | 156 | 0.6196 | 0.8117 | 0.8139 | 0.8117 | 0.8115 |
+| 0.6424 | 5.98 | 187 | 0.6205 | 0.8021 | 0.8075 | 0.8021 | 0.7961 |
+| 0.6309 | 6.98 | 218 | 0.5760 | 0.8117 | 0.8231 | 0.8117 | 0.8127 |
+| 0.5682 | 8.0 | 250 | 0.5748 | 0.8151 | 0.8196 | 0.8151 | 0.8157 |
+| 0.5981 | 8.99 | 281 | 0.5704 | 0.8213 | 0.8269 | 0.8213 | 0.8158 |
+| 0.547 | 9.98 | 312 | 0.5282 | 0.8377 | 0.8352 | 0.8377 | 0.8345 |
+| 0.5067 | 10.98 | 343 | 0.5281 | 0.8382 | 0.8372 | 0.8382 | 0.8356 |
+| 0.5066 | 12.0 | 375 | 0.5441 | 0.8247 | 0.8286 | 0.8247 | 0.8219 |
+| 0.4919 | 12.99 | 406 | 0.5580 | 0.8157 | 0.8236 | 0.8157 | 0.8155 |
+| 0.4508 | 13.98 | 437 | 0.5269 | 0.8303 | 0.8331 | 0.8303 | 0.8279 |
+| 0.4415 | 14.98 | 468 | 0.5399 | 0.8185 | 0.8249 | 0.8185 | 0.8203 |
+| 0.4178 | 16.0 | 500 | 0.5229 | 0.8320 | 0.8358 | 0.8320 | 0.8301 |
+| 0.366 | 16.99 | 531 | 0.5427 | 0.8275 | 0.8281 | 0.8275 | 0.8241 |
+| 0.3706 | 17.98 | 562 | 0.5389 | 0.8241 | 0.8242 | 0.8241 | 0.8230 |
+| 0.3609 | 18.98 | 593 | 0.5573 | 0.8247 | 0.8262 | 0.8247 | 0.8239 |
+| 0.3443 | 20.0 | 625 | 0.5605 | 0.8320 | 0.8325 | 0.8320 | 0.8302 |
+| 0.3214 | 20.99 | 656 | 0.5667 | 0.8281 | 0.8295 | 0.8281 | 0.8254 |
+| 0.3262 | 21.98 | 687 | 0.5797 | 0.8236 | 0.8237 | 0.8236 | 0.8214 |
+| 0.299 | 22.98 | 718 | 0.5938 | 0.8202 | 0.8225 | 0.8202 | 0.8195 |
+| 0.2792 | 24.0 | 750 | 0.5909 | 0.8275 | 0.8258 | 0.8275 | 0.8251 |
+| 0.2969 | 24.99 | 781 | 0.5658 | 0.8309 | 0.8319 | 0.8309 | 0.8306 |
+| 0.2559 | 25.98 | 812 | 0.5936 | 0.8309 | 0.8294 | 0.8309 | 0.8294 |
+| 0.2756 | 26.98 | 843 | 0.5898 | 0.8292 | 0.8295 | 0.8292 | 0.8287 |
+| 0.254 | 28.0 | 875 | 0.6043 | 0.8303 | 0.8319 | 0.8303 | 0.8289 |
+| 0.2674 | 28.99 | 906 | 0.5950 | 0.8371 | 0.8365 | 0.8371 | 0.8353 |
+| 0.2432 | 29.76 | 930 | 0.5907 | 0.8360 | 0.8348 | 0.8360 | 0.8345 |
 
 
 ### Framework versions
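The hyperparameter changes in this commit are consistent with each other: `gradient_accumulation_steps` stays at 4, so raising `train_batch_size` from 32 to 128 raises `total_train_batch_size` from 128 to 512, and the per-epoch step count drops accordingly (31 steps/epoch instead of 124). A minimal sketch of that arithmetic, plus the schedule implied by `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1` (function names are illustrative, not from the training script):

```python
def total_train_batch_size(per_device: int, accum_steps: int, num_devices: int = 1) -> int:
    # Effective batch size = per-device batch * gradient accumulation steps * devices.
    return per_device * accum_steps * num_devices

def linear_lr_with_warmup(step: int, total_steps: int, base_lr: float, warmup_ratio: float) -> float:
    # Linear warmup over the first warmup_ratio fraction of steps,
    # then linear decay to zero (the shape of a "linear" scheduler with warmup).
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)

# Old run: 32 * 4 = 128; this run: 128 * 4 = 512.
print(total_train_batch_size(32, 4))   # 128
print(total_train_batch_size(128, 4))  # 512
```

With the new run's 930 total steps, warmup would span the first 93 steps, peaking at the configured learning rate of 1e-4 before decaying.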
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c1e3ecd2677ca78df471e584cbe9618f51e9ccddaf2d5905a56afa9b3c061915
+oid sha256:af10e5b5f2fd9e086f71fe58b129096ce6e68615206be0dffd102790e0dec494
 size 1222129168
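What git stores for `model.safetensors` is not the weights themselves but a Git LFS pointer file like the one diffed above; only its `oid` (the SHA-256 of the new payload) changes, while the payload size stays 1222129168 bytes. A small sketch of parsing such a pointer (`parse_lfs_pointer` is a hypothetical helper, following the key/value format of the spec URL on the pointer's first line):

```python
def parse_lfs_pointer(text: str) -> dict:
    # A Git LFS pointer is a short key/value text file:
    #   version <spec-url>
    #   oid sha256:<hex digest>
    #   size <bytes>
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:af10e5b5f2fd9e086f71fe58b129096ce6e68615206be0dffd102790e0dec494
size 1222129168
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 1222129168
```

The unchanged `size` is expected here: the commit message is "Model save", i.e. the same ViT-Large architecture re-saved with different weight values.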