jaxnwagner committed
Commit
08578d7
1 Parent(s): 7eca81c

End of training

README.md ADDED
@@ -0,0 +1,98 @@
---
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: msoft_detr_finetuned_cppe5_5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# msoft_detr_finetuned_cppe5_5

This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set (COCO-style detection metrics; a computation sketch follows the list):
- Loss: 1.6502
- Map: 0.1506
- Map 50: 0.3241
- Map 75: 0.1179
- Map Small: 0.0383
- Map Medium: 0.1155
- Map Large: 0.2123
- Mar 1: 0.1856
- Mar 10: 0.3707
- Mar 100: 0.3953
- Mar Small: 0.1845
- Mar Medium: 0.3383
- Mar Large: 0.5613
- Map Coverall: 0.4385
- Mar 100 Coverall: 0.6243
- Map Face Shield: 0.0432
- Mar 100 Face Shield: 0.3886
- Map Gloves: 0.0719
- Mar 100 Gloves: 0.3103
- Map Goggles: 0.0309
- Mar 100 Goggles: 0.2846
- Map Mask: 0.1683
- Mar 100 Mask: 0.3684

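The `Map*`/`Mar*` fields above are COCO-style mean average precision and mean average recall values (overall, at IoU 0.50 and 0.75, by object size, and per class). The card does not state which tool produced them; the snippet below is only a sketch of how comparable numbers can be obtained with `torchmetrics` (an assumption, not necessarily the evaluator used here), with dummy boxes for illustration.

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

# COCO-style mAP/mAR metric; class_metrics=True also yields per-class values
# (e.g. coverall, face shield, gloves, goggles, mask for CPPE-5-style labels).
metric = MeanAveragePrecision(box_format="xyxy", iou_type="bbox", class_metrics=True)

# Dummy prediction/target pair for one image, purely illustrative.
preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 120.0, 200.0]]),
    "scores": torch.tensor([0.87]),
    "labels": torch.tensor([0]),
}]
targets = [{
    "boxes": torch.tensor([[12.0, 8.0, 118.0, 205.0]]),
    "labels": torch.tensor([0]),
}]

metric.update(preds, targets)
results = metric.compute()  # keys include map, map_50, map_75, map_small, mar_1, mar_10, mar_100, ...
print({k: v for k, v in results.items() if k.startswith(("map", "mar"))})
```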
## Model description

More information needed

## Intended uses & limitations

More information needed

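The two sections above are auto-generated placeholders. As a starting point, the snippet below sketches how a conditional DETR checkpoint like this one can be loaded for inference with `transformers`; the repository id `jaxnwagner/msoft_detr_finetuned_cppe5_5`, the example image URL, and the detection threshold are assumptions, not values stated in this card.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Assumed repository id; replace with the actual Hub path of this checkpoint.
checkpoint = "jaxnwagner/msoft_detr_finetuned_cppe5_5"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(checkpoint)

# Any RGB image works; this URL is only a placeholder example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]

for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(f"{model.config.id2label[label.item()]}: {score:.2f} at {box.tolist()}")
```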
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20

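A minimal sketch of how these values map onto `transformers.TrainingArguments`, assuming the model was trained with the `Trainer` API; the `output_dir`, and any dataset or data-collator wiring, are assumptions not stated in this card.

```python
from transformers import TrainingArguments

# Hyperparameters taken from the list above; output_dir is an assumed placeholder.
training_args = TrainingArguments(
    output_dir="msoft_detr_finetuned_cppe5_5",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,        # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,     # epsilon=1e-08
    lr_scheduler_type="cosine",
    num_train_epochs=20,
)
# training_args would then be passed to transformers.Trainer together with the
# model, image processor, datasets, and a detection data collator (not shown).
```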
### Training results

| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:------:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 50 | 3.3082 | 0.0005 | 0.0023 | 0.0 | 0.0005 | 0.0008 | 0.0004 | 0.0007 | 0.0115 | 0.0303 | 0.0124 | 0.0346 | 0.0328 | 0.0006 | 0.0631 | 0.0 | 0.0 | 0.0015 | 0.0531 | 0.0 | 0.0 | 0.0002 | 0.0351 |
| No log | 2.0 | 100 | 2.5406 | 0.0093 | 0.0254 | 0.0058 | 0.0083 | 0.0184 | 0.0093 | 0.0353 | 0.1003 | 0.1367 | 0.0596 | 0.1127 | 0.176 | 0.0281 | 0.2554 | 0.0012 | 0.038 | 0.003 | 0.1402 | 0.0012 | 0.0246 | 0.0133 | 0.2253 |
| No log | 3.0 | 150 | 2.2947 | 0.0378 | 0.097 | 0.0256 | 0.0102 | 0.0315 | 0.0531 | 0.0876 | 0.188 | 0.2363 | 0.1059 | 0.1821 | 0.3366 | 0.1268 | 0.4113 | 0.0159 | 0.119 | 0.0042 | 0.2004 | 0.0175 | 0.1492 | 0.0248 | 0.3013 |
| No log | 4.0 | 200 | 2.1994 | 0.0417 | 0.0962 | 0.0321 | 0.0092 | 0.032 | 0.0541 | 0.0946 | 0.2085 | 0.2556 | 0.0886 | 0.194 | 0.3656 | 0.1454 | 0.4829 | 0.0148 | 0.1165 | 0.0059 | 0.1897 | 0.0075 | 0.1585 | 0.0349 | 0.3302 |
| No log | 5.0 | 250 | 2.1116 | 0.0609 | 0.1345 | 0.0457 | 0.0149 | 0.0454 | 0.0779 | 0.1193 | 0.2433 | 0.2826 | 0.1029 | 0.2107 | 0.4202 | 0.2176 | 0.5306 | 0.0255 | 0.1861 | 0.011 | 0.2179 | 0.009 | 0.1738 | 0.0417 | 0.3044 |
| No log | 6.0 | 300 | 2.0783 | 0.0523 | 0.1158 | 0.043 | 0.0143 | 0.0381 | 0.0587 | 0.1021 | 0.2353 | 0.2789 | 0.1027 | 0.2118 | 0.3719 | 0.1791 | 0.5491 | 0.0137 | 0.1785 | 0.0126 | 0.1902 | 0.0116 | 0.1846 | 0.0448 | 0.292 |
| No log | 7.0 | 350 | 2.0252 | 0.0686 | 0.1619 | 0.0602 | 0.0155 | 0.051 | 0.0825 | 0.1253 | 0.2744 | 0.31 | 0.1124 | 0.2388 | 0.4518 | 0.2246 | 0.5617 | 0.0207 | 0.2532 | 0.0141 | 0.2326 | 0.0121 | 0.1708 | 0.0713 | 0.332 |
| No log | 8.0 | 400 | 1.9021 | 0.0952 | 0.2082 | 0.076 | 0.0155 | 0.0727 | 0.1111 | 0.1262 | 0.2869 | 0.3243 | 0.1054 | 0.2691 | 0.4416 | 0.3488 | 0.6284 | 0.0199 | 0.2114 | 0.0274 | 0.25 | 0.0109 | 0.2015 | 0.0689 | 0.3302 |
| No log | 9.0 | 450 | 1.8629 | 0.1124 | 0.2367 | 0.1016 | 0.0268 | 0.0809 | 0.1382 | 0.1613 | 0.3163 | 0.3522 | 0.1259 | 0.2844 | 0.5206 | 0.372 | 0.6162 | 0.0335 | 0.3 | 0.0289 | 0.267 | 0.0193 | 0.2262 | 0.1082 | 0.3516 |
| 3.5058 | 10.0 | 500 | 1.7706 | 0.1132 | 0.2447 | 0.0912 | 0.0249 | 0.0863 | 0.1384 | 0.1611 | 0.3268 | 0.3662 | 0.1789 | 0.3043 | 0.5176 | 0.3747 | 0.5905 | 0.0239 | 0.3051 | 0.0317 | 0.3009 | 0.0284 | 0.2415 | 0.1074 | 0.3929 |
| 3.5058 | 11.0 | 550 | 1.7552 | 0.1238 | 0.2833 | 0.0924 | 0.032 | 0.1008 | 0.1517 | 0.1603 | 0.3388 | 0.3762 | 0.1731 | 0.3242 | 0.5294 | 0.3793 | 0.5757 | 0.0325 | 0.362 | 0.0424 | 0.308 | 0.0275 | 0.2554 | 0.1374 | 0.38 |
| 3.5058 | 12.0 | 600 | 1.7298 | 0.1275 | 0.2938 | 0.0959 | 0.0361 | 0.0978 | 0.1648 | 0.1692 | 0.3493 | 0.382 | 0.1624 | 0.3309 | 0.5347 | 0.3959 | 0.6158 | 0.0438 | 0.381 | 0.0477 | 0.3022 | 0.0255 | 0.2662 | 0.1247 | 0.3449 |
| 3.5058 | 13.0 | 650 | 1.7136 | 0.136 | 0.2982 | 0.0999 | 0.0341 | 0.1025 | 0.1757 | 0.1758 | 0.3593 | 0.3918 | 0.178 | 0.3362 | 0.55 | 0.4242 | 0.6225 | 0.0469 | 0.381 | 0.0507 | 0.3022 | 0.0258 | 0.2862 | 0.1326 | 0.3671 |
| 3.5058 | 14.0 | 700 | 1.6856 | 0.1451 | 0.319 | 0.1159 | 0.0361 | 0.1075 | 0.1986 | 0.1834 | 0.3631 | 0.395 | 0.1736 | 0.3379 | 0.5598 | 0.4343 | 0.641 | 0.0486 | 0.381 | 0.0599 | 0.3076 | 0.0298 | 0.2754 | 0.1527 | 0.3698 |
| 3.5058 | 15.0 | 750 | 1.6613 | 0.148 | 0.3162 | 0.1197 | 0.0394 | 0.1165 | 0.1989 | 0.1836 | 0.3721 | 0.399 | 0.1881 | 0.3358 | 0.5663 | 0.4398 | 0.6365 | 0.0451 | 0.3962 | 0.0639 | 0.3058 | 0.0348 | 0.2877 | 0.1563 | 0.3689 |
| 3.5058 | 16.0 | 800 | 1.6491 | 0.1487 | 0.3267 | 0.1178 | 0.0406 | 0.118 | 0.2019 | 0.19 | 0.3722 | 0.3981 | 0.1847 | 0.3418 | 0.5637 | 0.4385 | 0.6293 | 0.0404 | 0.381 | 0.068 | 0.3098 | 0.0319 | 0.3 | 0.1646 | 0.3702 |
| 3.5058 | 17.0 | 850 | 1.6468 | 0.1489 | 0.3263 | 0.1187 | 0.0374 | 0.1188 | 0.2089 | 0.1894 | 0.3708 | 0.3978 | 0.1931 | 0.3406 | 0.5609 | 0.4355 | 0.6288 | 0.0426 | 0.3848 | 0.0693 | 0.3129 | 0.0314 | 0.2892 | 0.1658 | 0.3733 |
| 3.5058 | 18.0 | 900 | 1.6533 | 0.1487 | 0.3212 | 0.1177 | 0.0356 | 0.1137 | 0.2078 | 0.1856 | 0.3703 | 0.3964 | 0.1931 | 0.3356 | 0.5642 | 0.4369 | 0.6266 | 0.0412 | 0.3899 | 0.0721 | 0.3098 | 0.0292 | 0.2846 | 0.164 | 0.3711 |
| 3.5058 | 19.0 | 950 | 1.6509 | 0.1503 | 0.3241 | 0.1176 | 0.0383 | 0.1153 | 0.2121 | 0.1843 | 0.3705 | 0.3953 | 0.1848 | 0.3379 | 0.5623 | 0.4378 | 0.6234 | 0.0433 | 0.3899 | 0.0713 | 0.3103 | 0.0309 | 0.2846 | 0.1681 | 0.3684 |
| 1.4402 | 20.0 | 1000 | 1.6502 | 0.1506 | 0.3241 | 0.1179 | 0.0383 | 0.1155 | 0.2123 | 0.1856 | 0.3707 | 0.3953 | 0.1845 | 0.3383 | 0.5613 | 0.4385 | 0.6243 | 0.0432 | 0.3886 | 0.0719 | 0.3103 | 0.0309 | 0.2846 | 0.1683 | 0.3684 |


### Framework versions

- Transformers 4.44.0
- Pytorch 2.5.0+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
runs/Oct21_16-07-53_jackson-ThinkPad-T15g-Gen-1-1/events.out.tfevents.1729552078.jackson-ThinkPad-T15g-Gen-1-1.56935.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:2bf5b55043ab4bf4df78641c8b7fb21c1362d454468763d949e592ba4a13ec12
- size 33977
+ oid sha256:bde82f8fd1cf7a3315adb114157adde465553e5c8f21c0ba9d22e40a6e8eb689
+ size 35791