groderg committed on
Commit 5a27ea2
1 Parent(s): a88c31b

Upload README.md

Files changed (1): README.md (+143, -69)
README.md CHANGED
@@ -1,92 +1,166 @@
 
  ---
- library_name: transformers
- license: apache-2.0
- base_model: microsoft/resnet-50
  tags:
  - generated_from_trainer
- metrics:
- - accuracy
  model-index:
  - name: Resneteau-50-2024_09_23-batch-size32_freeze
    results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # Resneteau-50-2024_09_23-batch-size32_freeze
-
- This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the None dataset.
- It achieves the following results on the evaluation set:
  - Loss: 0.1906
  - F1 Micro: 0.6954
  - F1 Macro: 0.4462
  - Accuracy: 0.1827
- - Learning Rate: 0.0001
 
- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure
- ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.001
- - train_batch_size: 32
- - eval_batch_size: 32
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 400
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Rate |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:------:|
- | No log | 1.0 | 273 | 0.2460 | 0.5802 | 0.2267 | 0.0877 | 0.001 |
- | 0.2786 | 2.0 | 546 | 0.2217 | 0.6412 | 0.3160 | 0.1369 | 0.001 |
- | 0.2786 | 3.0 | 819 | 0.2117 | 0.6596 | 0.3581 | 0.1486 | 0.001 |
- | 0.231 | 4.0 | 1092 | 0.2049 | 0.6674 | 0.3831 | 0.1618 | 0.001 |
- | 0.231 | 5.0 | 1365 | 0.2016 | 0.6707 | 0.3965 | 0.1677 | 0.001 |
- | 0.2206 | 6.0 | 1638 | 0.2002 | 0.6720 | 0.4076 | 0.1677 | 0.001 |
- | 0.2206 | 7.0 | 1911 | 0.1976 | 0.6752 | 0.4142 | 0.1746 | 0.001 |
- | 0.2157 | 8.0 | 2184 | 0.1971 | 0.6824 | 0.4281 | 0.1764 | 0.001 |
- | 0.2157 | 9.0 | 2457 | 0.1961 | 0.6845 | 0.4300 | 0.1764 | 0.001 |
- | 0.2127 | 10.0 | 2730 | 0.1944 | 0.6763 | 0.4264 | 0.1805 | 0.001 |
- | 0.2117 | 11.0 | 3003 | 0.1940 | 0.6902 | 0.4391 | 0.1781 | 0.001 |
- | 0.2117 | 12.0 | 3276 | 0.1945 | 0.6939 | 0.4523 | 0.1729 | 0.001 |
- | 0.2107 | 13.0 | 3549 | 0.1936 | 0.6908 | 0.4461 | 0.1795 | 0.001 |
- | 0.2107 | 14.0 | 3822 | 0.1931 | 0.6916 | 0.4424 | 0.1781 | 0.001 |
- | 0.2105 | 15.0 | 4095 | 0.1935 | 0.6936 | 0.4431 | 0.1809 | 0.001 |
- | 0.2105 | 16.0 | 4368 | 0.1931 | 0.6896 | 0.4429 | 0.1805 | 0.001 |
- | 0.2086 | 17.0 | 4641 | 0.1931 | 0.6953 | 0.4411 | 0.1819 | 0.001 |
- | 0.2086 | 18.0 | 4914 | 0.1908 | 0.6984 | 0.4490 | 0.1857 | 0.001 |
- | 0.2101 | 19.0 | 5187 | 0.1925 | 0.6879 | 0.4428 | 0.1812 | 0.001 |
- | 0.2101 | 20.0 | 5460 | 0.1913 | 0.6797 | 0.4357 | 0.1774 | 0.001 |
- | 0.2088 | 21.0 | 5733 | 0.1915 | 0.6958 | 0.4381 | 0.1823 | 0.001 |
- | 0.2084 | 22.0 | 6006 | 0.1919 | 0.7039 | 0.4535 | 0.1826 | 0.001 |
- | 0.2084 | 23.0 | 6279 | 0.1926 | 0.6907 | 0.4363 | 0.1798 | 0.001 |
- | 0.2083 | 24.0 | 6552 | 0.1919 | 0.6953 | 0.4544 | 0.1805 | 0.001 |
- | 0.2083 | 25.0 | 6825 | 0.1919 | 0.6962 | 0.4466 | 0.1781 | 0.0001 |
- | 0.2076 | 26.0 | 7098 | 0.1912 | 0.6943 | 0.4418 | 0.1823 | 0.0001 |
- | 0.2076 | 27.0 | 7371 | 0.1912 | 0.6972 | 0.4500 | 0.1809 | 0.0001 |
- | 0.2081 | 28.0 | 7644 | 0.1915 | 0.6944 | 0.4454 | 0.1857 | 0.0001 |
-
- ### Framework versions
-
- - Transformers 4.44.2
- - Pytorch 2.4.1+cu121
- - Datasets 3.0.0
- - Tokenizers 0.19.1

+
  ---
+ language:
+ - eng
+ license: wtfpl
  tags:
+ - multilabel-image-classification
+ - multilabel
  - generated_from_trainer
+ base_model: microsoft/resnet-50
  model-index:
  - name: Resneteau-50-2024_09_23-batch-size32_freeze
    results: []
  ---

+ Resneteau is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50). It achieves the following results on the test set:
 
  - Loss: 0.1906
  - F1 Micro: 0.6954
  - F1 Macro: 0.4462
  - Accuracy: 0.1827
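F1 Micro pools true/false positives over all labels before computing a single F1, while F1 Macro averages per-label F1 scores, so rare classes pull the macro score down. A minimal, dependency-free sketch (not the repository's code) of the difference:

```python
# Minimal sketch (not the repository's code) of the two F1 averages
# reported above, for multilabel 0/1 predictions.

def f1_micro_macro(y_true, y_pred):
    """y_true, y_pred: equal-length lists of 0/1 label vectors."""
    n = len(y_true[0])
    tp, fp, fn = [0] * n, [0] * n, [0] * n
    for t, p in zip(y_true, y_pred):
        for i in range(n):
            tp[i] += p[i] and t[i]
            fp[i] += p[i] and not t[i]
            fn[i] += (not p[i]) and t[i]
    # Micro: pool counts over all labels, then compute one F1.
    TP, FP, FN = sum(tp), sum(fp), sum(fn)
    micro = 2 * TP / (2 * TP + FP + FN) if (2 * TP + FP + FN) else 0.0
    # Macro: per-label F1, then an unweighted mean; rare classes count as
    # much as common ones, which is why F1 Macro sits well below F1 Micro.
    f1s = [
        2 * tp[i] / (2 * tp[i] + fp[i] + fn[i]) if (2 * tp[i] + fp[i] + fn[i]) else 0.0
        for i in range(n)
    ]
    return micro, sum(f1s) / n

# Tiny worked example: two samples, three labels, one missed positive.
micro, macro = f1_micro_macro([[1, 0, 1], [0, 1, 0]], [[1, 0, 0], [0, 1, 0]])
```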
 
+ ---
+
+ # Model description
+ Resneteau is built on top of the microsoft/resnet-50 model for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.

+ The source code for training the model can be found in this [Git repository](https://github.com/SeatizenDOI/DinoVdeau).
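Such a head can be sketched in PyTorch; the hidden width, dropout rate, and layer ordering below are illustrative assumptions, not values taken from the repository:

```python
import torch
import torch.nn as nn

# Illustrative sketch of a classification head mixing linear, ReLU,
# batch-norm and dropout layers, as the card describes. The hidden width
# (512) and dropout rate (0.5) are assumptions, not the repository's values.
NUM_CLASSES = 31  # one output per class in the table below

head = nn.Sequential(
    nn.Linear(2048, 512),         # ResNet-50 pools to a 2048-d feature vector
    nn.BatchNorm1d(512),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, NUM_CLASSES),  # raw logits, one per label
)

features = torch.randn(4, 2048)   # dummy batch of pooled backbone features
logits = head(features)           # shape (4, NUM_CLASSES)
```

With the encoder frozen (see the hyperparameters below), only these head parameters receive gradient updates.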
+ - **Developed by:** [lombardata](https://huggingface.co/lombardata), credits to [César Leblanc](https://huggingface.co/CesarLeblanc) and [Victor Illien](https://huggingface.co/groderg)

+ ---
+
+ # Intended uses & limitations
+ You can use the raw model to classify diverse marine species, including coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes, and seagrass species.
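Because this is a multilabel model, each class gets an independent sigmoid probability rather than a softmax score. A minimal sketch of turning logits into predicted labels (the 0.5 threshold and the three class names are illustrative assumptions):

```python
import torch

# Sketch: turning raw logits into multilabel predictions. The 0.5 threshold
# is a common default, not necessarily the repository's choice, and the
# class names are an illustrative subset of the full label set.
class_names = ["Fish", "Rock", "Sand"]
logits = torch.tensor([[2.0, -1.0, 0.3]])  # one row of model outputs
probs = torch.sigmoid(logits)              # independent per-class probabilities
predicted = [name for name, p in zip(class_names, probs[0]) if p > 0.5]
```

Note that several labels can fire at once (or none at all), which softmax/argmax would not allow.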
+
+ ---

+ # Training and evaluation data
+ Details on the number of images for each class are given in the following table:
+ | Class | train | val | test | Total |
+ |:-------------------------|--------:|------:|-------:|--------:|
+ | Acropore_branched | 1469 | 464 | 475 | 2408 |
+ | Acropore_digitised | 568 | 160 | 160 | 888 |
+ | Acropore_sub_massive | 150 | 50 | 43 | 243 |
+ | Acropore_tabular | 999 | 297 | 293 | 1589 |
+ | Algae_assembly | 2546 | 847 | 845 | 4238 |
+ | Algae_drawn_up | 367 | 126 | 127 | 620 |
+ | Algae_limestone | 1652 | 557 | 563 | 2772 |
+ | Algae_sodding | 3148 | 984 | 985 | 5117 |
+ | Atra/Leucospilota | 1084 | 348 | 360 | 1792 |
+ | Bleached_coral | 219 | 71 | 70 | 360 |
+ | Blurred | 191 | 67 | 62 | 320 |
+ | Dead_coral | 1979 | 642 | 643 | 3264 |
+ | Fish | 2018 | 656 | 647 | 3321 |
+ | Homo_sapiens | 161 | 62 | 59 | 282 |
+ | Human_object | 157 | 58 | 55 | 270 |
+ | Living_coral | 406 | 154 | 141 | 701 |
+ | Millepore | 385 | 127 | 125 | 637 |
+ | No_acropore_encrusting | 441 | 130 | 154 | 725 |
+ | No_acropore_foliaceous | 204 | 36 | 46 | 286 |
+ | No_acropore_massive | 1031 | 336 | 338 | 1705 |
+ | No_acropore_solitary | 202 | 53 | 48 | 303 |
+ | No_acropore_sub_massive | 1401 | 433 | 422 | 2256 |
+ | Rock | 4489 | 1495 | 1473 | 7457 |
+ | Rubble | 3092 | 1030 | 1001 | 5123 |
+ | Sand | 5842 | 1939 | 1938 | 9719 |
+ | Sea_cucumber | 1408 | 439 | 447 | 2294 |
+ | Sea_urchins | 327 | 107 | 111 | 545 |
+ | Sponge | 269 | 96 | 105 | 470 |
+ | Syringodium_isoetifolium | 1212 | 392 | 391 | 1995 |
+ | Thalassodendron_ciliatum | 782 | 261 | 260 | 1303 |
+ | Useless | 579 | 193 | 193 | 965 |
 
+ ---

+ # Training procedure

+ ## Training hyperparameters

  The following hyperparameters were used during training:
+
+ - **Number of Epochs**: 28.0
+ - **Learning Rate**: 0.001
+ - **Train Batch Size**: 32
+ - **Eval Batch Size**: 32
+ - **Optimizer**: Adam
+ - **LR Scheduler Type**: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1
+ - **Freeze Encoder**: Yes
+ - **Data Augmentation**: Yes
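The optimizer and scheduler settings above can be sketched as follows; the stand-in model and the simulated loss values are illustrative, but the Adam / ReduceLROnPlateau configuration matches the list:

```python
import torch

# Sketch of the optimizer/scheduler setup listed above: Adam at lr=0.001,
# with ReduceLROnPlateau cutting the lr by a factor of 0.1 after 5 epochs
# without validation-loss improvement. The model is a stand-in; in real
# training, scheduler.step() receives each epoch's validation loss.
model = torch.nn.Linear(2048, 31)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5
)

# Simulated validation losses that plateau, triggering one lr reduction:
for val_loss in [0.20, 0.195, 0.194, 0.194, 0.194, 0.194, 0.194, 0.194, 0.194]:
    scheduler.step(val_loss)

lr_now = optimizer.param_groups[0]["lr"]  # reduced to 1e-4 after the plateau
```

This matches the results table below, where the learning rate drops from 0.001 to 0.0001 around epoch 25 after the validation loss stops improving.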
+
+ ## Data Augmentation
+ Data were augmented using the following transformations:
+
+ Train Transforms
+ - **PreProcess**: No additional parameters
+ - **Resize**: probability=1.00
+ - **RandomHorizontalFlip**: probability=0.25
+ - **RandomVerticalFlip**: probability=0.25
+ - **ColorJiggle**: probability=0.25
+ - **RandomPerspective**: probability=0.25
+ - **Normalize**: probability=1.00
+
+ Val Transforms
+ - **PreProcess**: No additional parameters
+ - **Resize**: probability=1.00
+ - **Normalize**: probability=1.00
+
+ ## Training results
+
+ | Epoch | Validation Loss | Accuracy | F1 Micro | F1 Macro | Learning Rate |
+ |:-----:|:---------------:|:--------:|:--------:|:--------:|:-------------:|
+ | 1 | 0.2460 | 0.0877 | 0.5802 | 0.2267 | 0.001 |
+ | 2 | 0.2217 | 0.1369 | 0.6412 | 0.3160 | 0.001 |
+ | 3 | 0.2117 | 0.1486 | 0.6596 | 0.3581 | 0.001 |
+ | 4 | 0.2049 | 0.1618 | 0.6674 | 0.3831 | 0.001 |
+ | 5 | 0.2016 | 0.1677 | 0.6707 | 0.3965 | 0.001 |
+ | 6 | 0.2002 | 0.1677 | 0.6720 | 0.4076 | 0.001 |
+ | 7 | 0.1976 | 0.1746 | 0.6752 | 0.4142 | 0.001 |
+ | 8 | 0.1971 | 0.1764 | 0.6824 | 0.4281 | 0.001 |
+ | 9 | 0.1961 | 0.1764 | 0.6845 | 0.4300 | 0.001 |
+ | 10 | 0.1944 | 0.1805 | 0.6763 | 0.4264 | 0.001 |
+ | 11 | 0.1940 | 0.1781 | 0.6902 | 0.4391 | 0.001 |
+ | 12 | 0.1945 | 0.1729 | 0.6939 | 0.4523 | 0.001 |
+ | 13 | 0.1936 | 0.1795 | 0.6908 | 0.4461 | 0.001 |
+ | 14 | 0.1931 | 0.1781 | 0.6916 | 0.4424 | 0.001 |
+ | 15 | 0.1935 | 0.1809 | 0.6936 | 0.4431 | 0.001 |
+ | 16 | 0.1931 | 0.1805 | 0.6896 | 0.4429 | 0.001 |
+ | 17 | 0.1931 | 0.1819 | 0.6953 | 0.4411 | 0.001 |
+ | 18 | 0.1908 | 0.1857 | 0.6984 | 0.4490 | 0.001 |
+ | 19 | 0.1925 | 0.1812 | 0.6879 | 0.4428 | 0.001 |
+ | 20 | 0.1913 | 0.1774 | 0.6797 | 0.4357 | 0.001 |
+ | 21 | 0.1915 | 0.1823 | 0.6958 | 0.4381 | 0.001 |
+ | 22 | 0.1919 | 0.1826 | 0.7039 | 0.4535 | 0.001 |
+ | 23 | 0.1926 | 0.1798 | 0.6907 | 0.4363 | 0.001 |
+ | 24 | 0.1919 | 0.1805 | 0.6953 | 0.4544 | 0.001 |
+ | 25 | 0.1919 | 0.1781 | 0.6962 | 0.4466 | 0.0001 |
+ | 26 | 0.1912 | 0.1823 | 0.6943 | 0.4418 | 0.0001 |
+ | 27 | 0.1912 | 0.1809 | 0.6972 | 0.4500 | 0.0001 |
+ | 28 | 0.1915 | 0.1857 | 0.6944 | 0.4454 | 0.0001 |
+
+ ---
+
+ # CO2 Emissions
+
+ The estimated CO2 emissions for training this model are documented below:
+
+ - **Emissions**: 0.1871 grams of CO2
+ - **Source**: Code Carbon
+ - **Training Type**: fine-tuning
+ - **Geographical Location**: Brest, France
+ - **Hardware Used**: NVIDIA Tesla V100 PCIe 32 GB
+
+ ---
+
+ # Framework Versions
+
+ - **Transformers**: 4.44.2
+ - **Pytorch**: 2.4.1+cu121
+ - **Datasets**: 3.0.0
+ - **Tokenizers**: 0.19.1