Bill Psomas committed
Commit 6a2c6d4
Parent(s): 3ddc092
first model commit

Browse files:
- README.md +43 -0
- checkpoint.pth +3 -0
- log.txt +99 -0
README.md
CHANGED
@@ -1,3 +1,46 @@
 ---
 license: cc-by-4.0
+datasets:
+- imagenet-1k
+metrics:
+- accuracy
+pipeline_tag: image-classification
+language:
+- en
+tags:
+- vision transformer
+- simpool
+- dino
+- computer vision
+- deep learning
 ---
+
+# Self-supervised ViT-S/16 (small-sized Vision Transformer with patch size 16) model with SimPool
+
+ViT-S model with SimPool (no gamma), trained on ImageNet-1k for 100 epochs with DINO self-supervision.
+SimPool is a simple attention-based pooling method applied at the end of the network, introduced in this ICCV 2023 [paper](https://arxiv.org/pdf/2309.06891.pdf) and released in this [repository](https://github.com/billpsomas/simpool/).
+Disclaimer: This model card is written by the author of SimPool, i.e. [Bill Psomas](http://users.ntua.gr/psomasbill/).
+
+## Motivation
+
+Convolutional networks and vision transformers have different forms of pairwise interactions, pooling across layers, and pooling at the end of the network. Does the latter really need to be different?
+As a by-product of pooling, vision transformers provide spatial attention for free, but it is most often of low quality unless self-supervised, which is not well studied. Is supervision really the problem?
+
+## Method
+
+SimPool is a simple attention-based pooling mechanism that replaces the default pooling in both convolutional and transformer encoders. For transformers, we completely discard the [CLS] token.
+Interestingly, we find that, whether supervised or self-supervised, SimPool improves performance on pre-training and downstream tasks and provides attention maps delineating object boundaries in all cases.
+One could thus call SimPool universal.
+
+## BibTeX entry and citation info
+
+```
+@misc{psomas2023simpool,
+      title={Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?},
+      author={Bill Psomas and Ioannis Kakogeorgiou and Konstantinos Karantzalos and Yannis Avrithis},
+      year={2023},
+      eprint={2309.06891},
+      archivePrefix={arXiv},
+      primaryClass={cs.CV}
+}
+```
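The pooling step described in the model card can be sketched in a few lines: a query is formed from the global-average-pooled features, keys from the patch tokens, and the pooled vector is the attention-weighted sum of the tokens. Below is a minimal NumPy sketch of that idea only (single head, no LayerNorm, no gamma, matching this checkpoint); `W_q` and `W_k` are hypothetical stand-ins for the learned projections, and the official implementation lives in the linked repository.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def simpool_sketch(X, W_q, W_k):
    """Attention-based pooling over patch tokens X of shape (N, d).

    The query is derived from the GAP vector, the keys from the tokens;
    the output is the attention-weighted sum of the raw tokens.
    """
    gap = X.mean(axis=0)                        # (d,) global average pool
    q = W_q @ gap                               # (d,) query
    K = X @ W_k.T                               # (N, d) keys
    attn = softmax(q @ K.T / np.sqrt(len(q)))   # (N,) attention over patches
    return attn @ X                             # (d,) pooled representation
```

For a ViT-S/16 on 224x224 inputs there are N = 196 patch tokens with d = 384, but the sketch is shape-generic.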
checkpoint.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:982a7087d5a882b2d5fc0a4e630eba978858433987ad0a7278bf0700ff8e3a4e
size 709529005
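The three lines above are a Git LFS pointer (spec v1), not the ~709 MB checkpoint itself; each line is a `key value` pair. A minimal sketch of parsing such a pointer (a hypothetical helper, not part of this repo):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict with version, oid, size."""
    # Each non-empty line is "key value"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size is the blob size in bytes
    return fields
```

The actual weights are fetched by the LFS filter (or the Hub client) using the `oid` as content address.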
log.txt
ADDED
@@ -0,0 +1,99 @@
{"train_loss": 9.6016574977786, "train_entropy": 8.838686492184712, "train_KL_div": 0.7629709506476022, "train_lr": 0.00023434083952776565, "train_wd": 0.040207133218771285, "epoch": 1}
{"train_loss": 8.825038472091608, "train_entropy": 7.349404212797083, "train_KL_div": 1.4756342536309672, "train_lr": 0.00039060059966268974, "train_wd": 0.04056212067116663, "epoch": 2}
{"train_loss": 7.290507338257003, "train_entropy": 5.02179586896667, "train_KL_div": 2.268711448786111, "train_lr": 0.0005468603597976139, "train_wd": 0.04109419164666179, "epoch": 3}
{"train_loss": 6.68299277572614, "train_entropy": 4.079524801921874, "train_KL_div": 2.6034679803008367, "train_lr": 0.0007031201199325382, "train_wd": 0.041802821055441655, "epoch": 4}
{"train_loss": 6.140093786019523, "train_entropy": 3.358423201517341, "train_KL_div": 2.7816706052949325, "train_lr": 0.0008593798800674621, "train_wd": 0.042687309565833706, "epoch": 5}
{"train_loss": 5.464924426431733, "train_entropy": 2.6708711366553666, "train_KL_div": 2.7940532900742037, "train_lr": 0.0010156396402023862, "train_wd": 0.043746784294463686, "epoch": 6}
{"train_loss": 5.1482936461071604, "train_entropy": 2.3714219727957717, "train_KL_div": 2.776871666656592, "train_lr": 0.0011718994003373105, "train_wd": 0.044980199667686495, "epoch": 7}
{"train_loss": 4.607514113746235, "train_entropy": 1.9814146187419373, "train_KL_div": 2.626099488647337, "train_lr": 0.001328159160472235, "train_wd": 0.046386338453440666, "epoch": 8}
{"train_loss": 4.443898138629132, "train_entropy": 1.9544253942744572, "train_KL_div": 2.4894727368156735, "train_lr": 0.0014844189206071592, "train_wd": 0.04796381296250994, "epoch": 9}
{"train_loss": 4.615895002968679, "train_entropy": 2.144930606648596, "train_KL_div": 2.4709643702407242, "train_lr": 0.0015623425177379259, "train_wd": 0.04971106641800472, "epoch": 10}
{"train_loss": 4.467690338275345, "train_entropy": 1.9141723605560408, "train_KL_div": 2.5535179734937405, "train_lr": 0.0015613972639124296, "train_wd": 0.05162637449171258, "epoch": 11}
{"train_loss": 4.176681394133249, "train_entropy": 1.6654283631823348, "train_KL_div": 2.5112530118334973, "train_lr": 0.0015595076125481378, "train_wd": 0.0537078470058034, "epoch": 12}
{"train_loss": 3.9637591180244436, "train_entropy": 1.53661340361532, "train_KL_div": 2.427145709252819, "train_lr": 0.001556675865894157, "train_wd": 0.055953429798204314, "epoch": 13}
{"train_loss": 3.8259563712609106, "train_entropy": 1.4903685098361552, "train_KL_div": 2.3355878653431743, "train_lr": 0.001552905473997598, "train_wd": 0.058360906749811814, "epoch": 14}
{"train_loss": 3.780827928471759, "train_entropy": 1.544112473940343, "train_KL_div": 2.2367154601344756, "train_lr": 0.0015482010305001917, "train_wd": 0.060927901971533055, "epoch": 15, "k-NN": {"10": {"top1": 49.384, "top5": 69.216}, "20": {"top1": 49.524, "top5": 71.72}, "100": {"top1": 47.01, "top5": 73.19}, "200": {"top1": 45.186, "top5": 72.35}}}
{"train_loss": 3.934916683095161, "train_entropy": 1.7987294602234016, "train_KL_div": 2.13618722973132, "train_lr": 0.0015425682670416805, "train_wd": 0.06365188214900147, "epoch": 16}
{"train_loss": 4.273039289372627, "train_entropy": 2.182137804504933, "train_KL_div": 2.090901461906317, "train_lr": 0.0015360140462766817, "train_wd": 0.06653015904265162, "epoch": 17}
{"train_loss": 4.518625044696112, "train_entropy": 2.400853268992968, "train_KL_div": 2.1177717566694936, "train_lr": 0.001528546353513584, "train_wd": 0.06955989214068818, "epoch": 18}
{"train_loss": 4.588569321180417, "train_entropy": 2.434020743937436, "train_KL_div": 2.154548554700438, "train_lr": 0.0015201742869857219, "train_wd": 0.07273809146232776, "epoch": 19}
{"train_loss": 4.608074431565313, "train_entropy": 2.460412021849693, "train_KL_div": 2.147662396536329, "train_lr": 0.0015109080467665377, "train_wd": 0.07606162050854814, "epoch": 20}
{"train_loss": 4.600316236013178, "train_entropy": 2.4974706670703326, "train_KL_div": 2.1028455522453537, "train_lr": 0.0015007589223423828, "train_wd": 0.07952719935743567, "epoch": 21}
{"train_loss": 4.628953465553167, "train_entropy": 2.5925078118204845, "train_KL_div": 2.0364456559292305, "train_lr": 0.0014897392788580208, "train_wd": 0.08313140790107038, "epoch": 22}
{"train_loss": 4.6616803715744295, "train_entropy": 2.68158832544241, "train_KL_div": 1.9800920613217101, "train_lr": 0.0014778625420515791, "train_wd": 0.08687068922076156, "epoch": 23}
{"train_loss": 4.705581154546315, "train_entropy": 2.774451395818995, "train_KL_div": 1.9311297706408026, "train_lr": 0.0014651431818973938, "train_wd": 0.09074135309729545, "epoch": 24}
{"train_loss": 4.798786098066529, "train_entropy": 2.9162018141249133, "train_KL_div": 1.8825843085597114, "train_lr": 0.0014515966949765462, "train_wd": 0.09473957965273652, "epoch": 25}
{"train_loss": 4.917793233364095, "train_entropy": 3.078642743349373, "train_KL_div": 1.8391505121105094, "train_lr": 0.001437239585596674, "train_wd": 0.0988614231201867, "epoch": 26}
{"train_loss": 5.090685265835638, "train_entropy": 3.294158518146828, "train_KL_div": 1.7965267640233114, "train_lr": 0.0014220893456840078, "train_wd": 0.1031028157377795, "epoch": 27}
{"train_loss": 5.295125598538153, "train_entropy": 3.54745035213206, "train_KL_div": 1.7476752716291704, "train_lr": 0.0014061644334721684, "train_wd": 0.10745957176307085, "epoch": 28}
{"train_loss": 5.444263349318042, "train_entropy": 3.7307645231019997, "train_KL_div": 1.713498836193585, "train_lr": 0.001389484251013644, "train_wd": 0.11192739160386189, "epoch": 29}
{"train_loss": 5.528454711070886, "train_entropy": 3.8203435055841735, "train_KL_div": 1.7081112107081684, "train_lr": 0.0013720691205413695, "train_wd": 0.11650186606137537, "epoch": 30, "k-NN": {"10": {"top1": 61.428, "top5": 79.59}, "20": {"top1": 61.302, "top5": 81.644}, "100": {"top1": 58.918, "top5": 83.048}, "200": {"top1": 57.33, "top5": 82.586}}}
{"train_loss": 5.5919645541091025, "train_entropy": 3.877703906296939, "train_KL_div": 1.7142606556490017, "train_lr": 0.0013539402597092334, "train_wd": 0.12117848068160543, "epoch": 31}
{"train_loss": 5.650718631668436, "train_entropy": 3.932058398515861, "train_KL_div": 1.718660220410733, "train_lr": 0.00133511975574162, "train_wd": 0.125952620210536, "epoch": 32}
{"train_loss": 5.700442221743996, "train_entropy": 3.9785285108652357, "train_KL_div": 1.7219136994120332, "train_lr": 0.001315630538523546, "train_wd": 0.1308195731488414, "epoch": 33}
{"train_loss": 5.73233750821798, "train_entropy": 4.006876481186219, "train_KL_div": 1.7254610160304038, "train_lr": 0.0012954963526640943, "train_wd": 0.13577453640156523, "epoch": 34}
{"train_loss": 5.757128716669255, "train_entropy": 4.03149292200599, "train_KL_div": 1.7256357647399467, "train_lr": 0.0012747417285673154, "train_wd": 0.14081262001819686, "epoch": 35}
{"train_loss": 5.785474360882677, "train_entropy": 4.058494038130625, "train_KL_div": 1.726980292763582, "train_lr": 0.0012533919525456567, "train_wd": 0.14592885201846242, "epoch": 36}
{"train_loss": 5.816792429684848, "train_entropy": 4.091445624456043, "train_KL_div": 1.7253467907930151, "train_lr": 0.0012314730360125248, "train_wd": 0.15111818329906818, "epoch": 37}
{"train_loss": 5.858977683070003, "train_entropy": 4.149132025029643, "train_KL_div": 1.7098456411660277, "train_lr": 0.001209011683791361, "train_wd": 0.15637549261655487, "epoch": 38}
{"train_loss": 5.854116359031029, "train_entropy": 4.129626232653688, "train_KL_div": 1.7244900942574732, "train_lr": 0.0011860352615799435, "train_wd": 0.16169559164134895, "epoch": 39}
{"train_loss": 5.877115037573791, "train_entropy": 4.155965776889492, "train_KL_div": 1.7211492258373609, "train_lr": 0.0011625717626094878, "train_wd": 0.16707323007801708, "epoch": 40}
{"train_loss": 5.903606957640222, "train_entropy": 4.184941002088067, "train_KL_div": 1.7186659157257538, "train_lr": 0.00113864977353922, "train_wd": 0.17250310084667156, "epoch": 41}
{"train_loss": 5.928309389868801, "train_entropy": 4.212747808845992, "train_KL_div": 1.7155615403959708, "train_lr": 0.0011142984396279119, "train_wd": 0.1779798453204202, "epoch": 42}
{"train_loss": 5.9482137703806215, "train_entropy": 4.232158036622161, "train_KL_div": 1.716055698027319, "train_lr": 0.0010895474292249114, "train_wd": 0.1834980586136828, "epoch": 43}
{"train_loss": 6.114688973773799, "train_entropy": 4.446778877089725, "train_KL_div": 1.6679100739553823, "train_lr": 0.0010644268976237842, "train_wd": 0.18905229491616105, "epoch": 44}
{"train_loss": 6.023165416896232, "train_entropy": 4.327079139561448, "train_KL_div": 1.6960862367824046, "train_lr": 0.0010389674503227715, "train_wd": 0.19463707286719498, "epoch": 45, "k-NN": {"10": {"top1": 63.854, "top5": 81.452}, "20": {"top1": 63.754, "top5": 83.572}, "100": {"top1": 61.288, "top5": 84.686}, "200": {"top1": 59.816, "top5": 84.204}}}
{"train_loss": 6.018578177388946, "train_entropy": 4.3088487711532055, "train_KL_div": 1.7097293642872502, "train_lr": 0.0010132001057366771, "train_wd": 0.20024688096520574, "epoch": 46}
{"train_loss": 6.0261525496105195, "train_entropy": 4.31718156164248, "train_KL_div": 1.70897094820679, "train_lr": 0.000987156257405728, "train_wd": 0.2058761830068814, "epoch": 47}
{"train_loss": 6.049450063057649, "train_entropy": 4.3503140933546405, "train_KL_div": 1.6991359266097064, "train_lr": 0.0009608676357473667, "train_wd": 0.21151942355073927, "epoch": 48}
{"train_loss": 6.055034258006738, "train_entropy": 4.353923746267756, "train_KL_div": 1.7011104662909051, "train_lr": 0.0009343662693976334, "train_wd": 0.21717103339968105, "epoch": 49}
{"train_loss": 6.071552835837667, "train_entropy": 4.3812845704296395, "train_KL_div": 1.6902682152410957, "train_lr": 0.0009076844461891972, "train_wd": 0.2228254350971123, "epoch": 50}
{"train_loss": 6.073528658144925, "train_entropy": 4.382412263931593, "train_KL_div": 1.6911163584449602, "train_lr": 0.0008808546738136253, "train_wd": 0.22847704843122085, "epoch": 51}
{"train_loss": 6.083846948691862, "train_entropy": 4.397076711365761, "train_KL_div": 1.686770194055958, "train_lr": 0.0008539096402157426, "train_wd": 0.23412029594197434, "epoch": 52}
{"train_loss": 6.093910892556266, "train_entropy": 4.411201068325239, "train_KL_div": 1.6827097911897262, "train_lr": 0.0008268821737684258, "train_wd": 0.2397496084253973, "epoch": 53}
{"train_loss": 6.101757640841601, "train_entropy": 4.4230203100809975, "train_KL_div": 1.6787372882536693, "train_lr": 0.0007998052032762803, "train_wd": 0.24535943042970337, "epoch": 54}
{"train_loss": 6.122352782932391, "train_entropy": 4.450471238372178, "train_KL_div": 1.6718815083730079, "train_lr": 0.0007727117178569589, "train_wd": 0.25094422573785585, "epoch": 55}
{"train_loss": 6.132127726584357, "train_entropy": 4.472274922509256, "train_KL_div": 1.6598527743472373, "train_lr": 0.0007456347267490187, "train_wd": 0.2564984828311481, "epoch": 56}
{"train_loss": 6.127266715907217, "train_entropy": 4.461903781201376, "train_KL_div": 1.665362901171247, "train_lr": 0.000718607219095231, "train_wd": 0.26201672032840023, "epoch": 57}
{"train_loss": 6.137020083869122, "train_entropy": 4.482766561456802, "train_KL_div": 1.654253490413449, "train_lr": 0.0006916621237504022, "train_wd": 0.2674934923954292, "epoch": 58}
{"train_loss": 6.138586384292545, "train_entropy": 4.483335925127699, "train_KL_div": 1.6552504199620413, "train_lr": 0.0006648322691626421, "train_wd": 0.27292339411942207, "epoch": 59}
{"train_loss": 6.15009526339715, "train_entropy": 4.508787390219279, "train_KL_div": 1.6413078341984435, "train_lr": 0.0006381503433769356, "train_wd": 0.27830106684293765, "epoch": 60, "k-NN": {"10": {"top1": 65.774, "top5": 82.904}, "20": {"top1": 65.644, "top5": 84.836}, "100": {"top1": 63.442, "top5": 86.106}, "200": {"top1": 61.966, "top5": 85.76}}}
{"train_loss": 6.145333716937857, "train_entropy": 4.50656454658821, "train_KL_div": 1.6387691363962198, "train_lr": 0.0006116488542098235, "train_wd": 0.2836212034522523, "epoch": 61}
{"train_loss": 6.220877912698278, "train_entropy": 4.623517422378994, "train_KL_div": 1.5973604584134868, "train_lr": 0.0005853600896436113, "train_wd": 0.2888785536148418, "epoch": 62}
{"train_loss": 6.200733487789218, "train_entropy": 4.606007674572842, "train_KL_div": 1.5947257799982355, "train_lr": 0.0005593160784884495, "train_wd": 0.2940679289608212, "epoch": 63}
{"train_loss": 6.163529145948445, "train_entropy": 4.557968868046236, "train_KL_div": 1.6055602480626567, "train_lr": 0.0005335485513601587, "train_wd": 0.2991842082032451, "epoch": 64}
{"train_loss": 6.142964335548215, "train_entropy": 4.536803639750418, "train_KL_div": 1.6061606526486505, "train_lr": 0.0005080889020213638, "train_wd": 0.30422234219219807, "epoch": 65}
{"train_loss": 6.140386965332889, "train_entropy": 4.545921476538073, "train_KL_div": 1.5944654384882728, "train_lr": 0.00048296814913302083, "train_wd": 0.309177358897694, "epoch": 66}
{"train_loss": 6.147945730631088, "train_entropy": 4.567305964000667, "train_KL_div": 1.5806397286841156, "train_lr": 0.00045821689846297243, "train_wd": 0.3140443683164707, "epoch": 67}
{"train_loss": 6.16646631578592, "train_entropy": 4.617882958823334, "train_KL_div": 1.5485833234838364, "train_lr": 0.0004338653055975211, "train_wd": 0.31881856729783326, "epoch": 68}
{"train_loss": 6.120591118401248, "train_entropy": 4.557602857404169, "train_KL_div": 1.5629882150743248, "train_lr": 0.00040994303920149656, "train_wd": 0.32349524428378396, "epoch": 69}
{"train_loss": 6.092886272666605, "train_entropy": 4.530285849878744, "train_KL_div": 1.5626003856233772, "train_lr": 0.0003864792448715629, "train_wd": 0.3280697839587586, "epoch": 70}
{"train_loss": 6.0721086459484495, "train_entropy": 4.51407054978784, "train_KL_div": 1.5580380537746699, "train_lr": 0.00036350250962678915, "train_wd": 0.3325376718043881, "epoch": 71}
{"train_loss": 6.05207558392287, "train_entropy": 4.501762950740554, "train_KL_div": 1.5503125908056845, "train_lr": 0.00034104082707977557, "train_wd": 0.3368944985547855, "epoch": 72}
{"train_loss": 6.100403502685885, "train_entropy": 4.595429250220073, "train_KL_div": 1.504974210526629, "train_lr": 0.0003191215633307503, "train_wd": 0.3411359645479526, "epoch": 73}
{"train_loss": 6.086395905734151, "train_entropy": 4.60185743839498, "train_KL_div": 1.484538421239576, "train_lr": 0.0002977714236261864, "train_wd": 0.34525788396903384, "epoch": 74}
{"train_loss": 6.029256072660895, "train_entropy": 4.534781336523755, "train_KL_div": 1.4944747010109798, "train_lr": 0.0002770164198225779, "train_wd": 0.3492561889812042, "epoch": 75, "k-NN": {"10": {"top1": 67.574, "top5": 84.3}, "20": {"top1": 67.506, "top5": 86.126}, "100": {"top1": 65.168, "top5": 87.36}, "200": {"top1": 63.724, "top5": 87.05}}}
{"train_loss": 5.997199196468511, "train_entropy": 4.506528301435586, "train_KL_div": 1.4906708450149104, "train_lr": 0.0002568818386949952, "train_wd": 0.35312693374013904, "epoch": 76}
{"train_loss": 5.968025019360959, "train_entropy": 4.486546396148123, "train_KL_div": 1.4814785805569821, "train_lr": 0.0002373922111290497, "train_wd": 0.3568662982880826, "epoch": 77}
{"train_loss": 5.942441343814414, "train_entropy": 4.471833541886945, "train_KL_div": 1.4706077564328555, "train_lr": 0.00021857128223378125, "train_wd": 0.3604705923236838, "epoch": 78}
{"train_loss": 5.917035419929035, "train_entropy": 4.458002773617894, "train_KL_div": 1.4590326010678278, "train_lr": 0.00020044198241190762, "train_wd": 0.36393625884388187, "epoch": 79}
{"train_loss": 5.891883353528792, "train_entropy": 4.446675893517303, "train_KL_div": 1.4452074104216157, "train_lr": 0.00018302639942265788, "train_wd": 0.3672598776542355, "epoch": 80}
{"train_loss": 5.867708282720886, "train_entropy": 4.436048546893235, "train_KL_div": 1.431659677367818, "train_lr": 0.0001663457514712396, "train_wd": 0.3704381687442457, "epoch": 81}
{"train_loss": 5.841794226707182, "train_entropy": 4.425542518170754, "train_KL_div": 1.416251663870174, "train_lr": 0.00015042036135772408, "train_wd": 0.3734679955243245, "epoch": 82}
{"train_loss": 5.81447546561013, "train_entropy": 4.4156102033088835, "train_KL_div": 1.3988652084671953, "train_lr": 0.0001352696317168465, "train_wd": 0.37634636792123755, "epoch": 83}
{"train_loss": 5.789193666256792, "train_entropy": 4.406641286101511, "train_KL_div": 1.3825523365500716, "train_lr": 0.00012091202137888328, "train_wd": 0.37907044532893736, "epoch": 84}
{"train_loss": 5.76402064936374, "train_entropy": 4.398469194667478, "train_KL_div": 1.365551417885461, "train_lr": 0.00010736502288041127, "train_wd": 0.3816375394119056, "epoch": 85}
{"train_loss": 5.737998435975312, "train_entropy": 4.390824821853847, "train_KL_div": 1.347173576305465, "train_lr": 9.46451411523452e-05, "train_wd": 0.3840451167582112, "epoch": 86}
{"train_loss": 5.712686547482483, "train_entropy": 4.3830633081546955, "train_KL_div": 1.32962318763444, "train_lr": 8.27678734112232e-05, "train_wd": 0.38629080137968236, "epoch": 87}
{"train_loss": 5.686612793015808, "train_entropy": 4.376272977198905, "train_KL_div": 1.3103397655941262, "train_lr": 7.174769027823528e-05, "train_wd": 0.38837237705672434, "epoch": 88}
{"train_loss": 5.662975046502137, "train_entropy": 4.370640159397852, "train_KL_div": 1.292334841549173, "train_lr": 6.159801814899952e-05, "train_wd": 0.39028778952545473, "epoch": 89}
{"train_loss": 5.639817414545849, "train_entropy": 4.3655848089305405, "train_KL_div": 1.2742325575611577, "train_lr": 5.233122283556849e-05, "train_wd": 0.3920351485050172, "epoch": 90, "k-NN": {"10": {"top1": 69.646, "top5": 85.774}, "20": {"top1": 69.506, "top5": 87.402}, "100": {"top1": 67.248, "top5": 88.596}, "200": {"top1": 65.87, "top5": 88.328}}}
{"train_loss": 5.617757483842148, "train_entropy": 4.361931033818741, "train_KL_div": 1.255826400210156, "train_lr": 4.3958594500591234e-05, "train_wd": 0.39361272956306115, "epoch": 91}
{"train_loss": 5.598055835964529, "train_entropy": 4.358406307677192, "train_KL_div": 1.2396494818340609, "train_lr": 3.6490333901989545e-05, "train_wd": 0.3950189758175438, "epoch": 92}
{"train_loss": 5.57807337374035, "train_entropy": 4.355745730513264, "train_KL_div": 1.2223276046246085, "train_lr": 2.9935539964905247e-05, "train_wd": 0.39625249947319163, "epoch": 93}
{"train_loss": 5.563134756965387, "train_entropy": 4.354037230142127, "train_KL_div": 1.2090974785503859, "train_lr": 2.4302198696062125e-05, "train_wd": 0.39731208319108297, "epoch": 94}
{"train_loss": 5.5491939872447436, "train_entropy": 4.352705692608307, "train_KL_div": 1.196488237903127, "train_lr": 1.9597173454046613e-05, "train_wd": 0.39819668129001096, "epoch": 95}
{"train_loss": 5.537301711482156, "train_entropy": 4.351940518092692, "train_KL_div": 1.1853611436599794, "train_lr": 1.582619658736243e-05, "train_wd": 0.3989054207784532, "epoch": 96}
{"train_loss": 5.527070889541464, "train_entropy": 4.350602566525759, "train_KL_div": 1.176468279294153, "train_lr": 1.2993862450447075e-05, "train_wd": 0.39943760221609825, "epoch": 97}
{"train_loss": 5.520929540193952, "train_entropy": 4.351493565590213, "train_KL_div": 1.1694359276711308, "train_lr": 1.1103621806158807e-05, "train_wd": 0.3997927004041218, "epoch": 98}
{"train_loss": 5.515372934042104, "train_entropy": 4.35136944545127, "train_KL_div": 1.1640034516264617, "train_lr": 1.0157777621553632e-05, "train_wd": 0.39997036490348065, "epoch": 99, "k-NN": {"10": {"top1": 69.778, "top5": 85.91}, "20": {"top1": 69.602, "top5": 87.54}, "100": {"top1": 67.318, "top5": 88.674}, "200": {"top1": 65.966, "top5": 88.404}}}
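Each line of log.txt is one JSON object per epoch; every 15 epochs a "k-NN" field records validation k-NN accuracy at several neighbor counts. A small sketch (a hypothetical helper, not part of the repo) for pulling out the top-1 accuracy curve:

```python
import json

def knn_top1(log_lines, k="10"):
    """Collect (epoch, top-1 accuracy) pairs from DINO-style JSON-lines logs.

    Only lines that contain a "k-NN" evaluation contribute a point;
    `k` selects the neighbor count ("10", "20", "100", or "200").
    """
    points = []
    for line in log_lines:
        rec = json.loads(line)
        if "k-NN" in rec:
            points.append((rec["epoch"], rec["k-NN"][k]["top1"]))
    return points
```

Applied to this log, the 10-NN top-1 curve rises from 49.4 at epoch 15 to 69.8 at epoch 99, matching the entries above.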