sawthiha committed
Commit 242cefc · 1 Parent(s): 136af91

End of training

Files changed (4)
  1. README.md +11 -65
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. training_args.bin +1 -1
README.md CHANGED
@@ -8,15 +8,6 @@ tags:
  model-index:
  - name: segformer-b0-finetuned-deprem-satellite
  results: []
- widget:
- - src: >-
- https://datasets-server.huggingface.co/assets/deprem-ml/deprem_satellite_semantic_whu_dataset/--/default/train/9/image/image.jpg
- example_title: Example 1
- - src: >-
- https://datasets-server.huggingface.co/assets/deprem-ml/deprem_satellite_semantic_whu_dataset/--/default/train/3/image/image.jpg
- example_title: Example 2
- datasets:
- - deprem-ml/deprem_satellite_semantic_whu_dataset
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -26,7 +17,15 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) on the deprem-ml/deprem_satellite_semantic_whu_dataset dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0685
+ - eval_loss: 0.0641
+ - eval_mean_iou: 0.9849
+ - eval_mean_accuracy: 0.9933
+ - eval_overall_accuracy: 0.9933
+ - eval_runtime: 94.2835
+ - eval_samples_per_second: 10.988
+ - eval_steps_per_second: 2.206
+ - epoch: 4.18
+ - step: 1980

  ## Model description

@@ -51,64 +50,11 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 2
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.0956 | 0.04 | 20 | 0.0717 |
- | 0.1669 | 0.08 | 40 | 0.0708 |
- | 0.073 | 0.13 | 60 | 0.0722 |
- | 0.1258 | 0.17 | 80 | 0.0715 |
- | 0.1167 | 0.21 | 100 | 0.0719 |
- | 0.1157 | 0.25 | 120 | 0.0709 |
- | 0.1373 | 0.3 | 140 | 0.0709 |
- | 0.0749 | 0.34 | 160 | 0.0707 |
- | 0.1033 | 0.38 | 180 | 0.0701 |
- | 0.1277 | 0.42 | 200 | 0.0702 |
- | 0.0979 | 0.46 | 220 | 0.0703 |
- | 0.0959 | 0.51 | 240 | 0.0698 |
- | 0.1111 | 0.55 | 260 | 0.0700 |
- | 0.1389 | 0.59 | 280 | 0.0695 |
- | 0.1247 | 0.63 | 300 | 0.0697 |
- | 0.1385 | 0.68 | 320 | 0.0694 |
- | 0.083 | 0.72 | 340 | 0.0694 |
- | 0.1398 | 0.76 | 360 | 0.0694 |
- | 0.1268 | 0.8 | 380 | 0.0694 |
- | 0.1256 | 0.84 | 400 | 0.0692 |
- | 0.0801 | 0.89 | 420 | 0.0693 |
- | 0.1508 | 0.93 | 440 | 0.0691 |
- | 0.1229 | 0.97 | 460 | 0.0692 |
- | 0.0825 | 1.01 | 480 | 0.0693 |
- | 0.1465 | 1.05 | 500 | 0.0692 |
- | 0.1086 | 1.1 | 520 | 0.0693 |
- | 0.1679 | 1.14 | 540 | 0.0692 |
- | 0.138 | 1.18 | 560 | 0.0693 |
- | 0.1356 | 1.22 | 580 | 0.0689 |
- | 0.0822 | 1.27 | 600 | 0.0690 |
- | 0.1235 | 1.31 | 620 | 0.0689 |
- | 0.0983 | 1.35 | 640 | 0.0688 |
- | 0.1063 | 1.39 | 660 | 0.0689 |
- | 0.111 | 1.43 | 680 | 0.0689 |
- | 0.149 | 1.48 | 700 | 0.0692 |
- | 0.0952 | 1.52 | 720 | 0.0688 |
- | 0.1263 | 1.56 | 740 | 0.0687 |
- | 0.1124 | 1.6 | 760 | 0.0686 |
- | 0.1366 | 1.65 | 780 | 0.0688 |
- | 0.1222 | 1.69 | 800 | 0.0688 |
- | 0.1499 | 1.73 | 820 | 0.0686 |
- | 0.1285 | 1.77 | 840 | 0.0686 |
- | 0.1176 | 1.81 | 860 | 0.0687 |
- | 0.1234 | 1.86 | 880 | 0.0685 |
- | 0.0878 | 1.9 | 900 | 0.0685 |
- | 0.1267 | 1.94 | 920 | 0.0685 |
- | 0.1274 | 1.98 | 940 | 0.0685 |
-
+ - num_epochs: 5

  ### Framework versions

  - Transformers 4.36.2
  - Pytorch 2.1.2
  - Datasets 2.16.1
- - Tokenizers 0.15.0
+ - Tokenizers 0.15.0
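For anyone picking up this checkpoint, a minimal inference sketch follows. It assumes the repository id `sawthiha/segformer-b0-finetuned-deprem-satellite` (inferred from the committer and model name; a local clone of this repo works equally well) and a placeholder input file name; the model class matches the `SegformerForSemanticSegmentation` entry in config.json below.

```python
# Minimal inference sketch for the checkpoint committed here.
# The repo id and the input file name are assumptions, not part of this commit.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "sawthiha/segformer-b0-finetuned-deprem-satellite"  # assumed repo id
processor = SegformerImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)
model.eval()

image = Image.open("satellite_tile.png").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 of the input resolution, so upsample the logits
# back to the image size before taking the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]  # (H, W) label map
```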
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "segformer-b0-finetuned-deprem-satellite/checkpoint-3920",
+ "_name_or_path": "segformer-b0-finetuned-deprem-satellite/",
  "architectures": [
  "SegformerForSemanticSegmentation"
  ],
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4111c72a1cf2b7a62cd778808e094ea8c71f34f7c204ee5046d25755fbde3b1b
+ oid sha256:de15f482f117b91dd87b34216d9f22d4c7c09eb716c25d49c7c25fc0f71a5a02
  size 14884776
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5eb25bf7f25ff7446bb94eeff02b4cee799b103daee45f50a3a27494e34745d3
+ oid sha256:101e50b6dc8fd7ffea7d27cceaba0ad8c2b1b10161afa2d7221538cfddc6d5b0
  size 4728
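model.safetensors and training_args.bin are stored through Git LFS, so this commit only rewrites their pointer files (the `oid sha256:` and `size` lines above). A small sketch, assuming the actual files have been downloaded locally, of checking a file against its pointer metadata:

```python
# Check a downloaded LFS object against the sha256 oid and size recorded in
# its pointer file. The example values are copied from the new
# training_args.bin pointer in this commit.
import hashlib
from pathlib import Path

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    data = Path(path).read_bytes()
    return (
        len(data) == expected_size
        and hashlib.sha256(data).hexdigest() == expected_oid
    )

print(verify_lfs_object(
    "training_args.bin",
    "101e50b6dc8fd7ffea7d27cceaba0ad8c2b1b10161afa2d7221538cfddc6d5b0",
    4728,
))
```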