---
license: cc-by-4.0
datasets:
- imagenet-1k
metrics:
- accuracy
pipeline_tag: image-classification
language:
- en
tags:
- vision transformer
- simpool
- computer vision
- deep learning
---

# Supervised ViT-S/16 (small-sized Vision Transformer with patch size 16) model

Official ViT-S model trained on ImageNet-1k for 100 epochs, reproduced for the ICCV 2023 [SimPool](https://arxiv.org/abs/2309.06891) paper.
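
A minimal sketch of one way to load such a checkpoint, assuming the weights are compatible with the standard timm ViT-S/16 architecture; the filename `vit_small_checkpoint.pth` and the `state_dict` key are assumptions here, so check the repository files for the actual names:

```python
# Loading sketch (assumptions: timm-compatible ViT-S/16 weights, a local file
# named "vit_small_checkpoint.pth", and an optional "state_dict" wrapper key).
import timm
import torch

# Build a standard ViT-S/16 backbone with a 1000-class ImageNet head, no pretrained weights.
model = timm.create_model("vit_small_patch16_224", pretrained=False, num_classes=1000)

checkpoint = torch.load("vit_small_checkpoint.pth", map_location="cpu")  # hypothetical filename
state_dict = checkpoint.get("state_dict", checkpoint)  # handle both wrapped and plain checkpoints
model.load_state_dict(state_dict, strict=False)  # strict=False tolerates head/pooling key mismatches
model.eval()
```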

SimPool is a simple attention-based pooling method applied at the end of the network, released in this [repository](https://github.com/billpsomas/simpool/).
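
For illustration only, here is a minimal PyTorch sketch of the general idea of attention-based pooling: the global average of the patch tokens acts as a query that attends over all patch tokens, producing an attention-weighted pooled vector. This is a simplified sketch, not the official SimPool implementation (which the repository above provides); the module name and projection layout are assumptions.

```python
# Illustrative attention-based pooling in the spirit of SimPool (simplified;
# see the official repository for the exact implementation).
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # projects the GAP vector into a query
        self.k = nn.Linear(dim, dim)  # projects patch tokens into keys
        self.scale = dim ** -0.5      # standard dot-product attention scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim) patch tokens from the backbone
        q = self.q(x.mean(dim=1, keepdim=True))        # (batch, 1, dim): GAP as the query
        k = self.k(x)                                  # (batch, num_patches, dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (batch, 1, num_patches)
        attn = attn.softmax(dim=-1)                    # attention weights over patches
        return (attn @ x).squeeze(1)                   # (batch, dim): weighted sum of patch tokens

pooled = AttentionPooling(dim=384)(torch.randn(2, 196, 384))  # ViT-S: 384-dim tokens, 14x14 patches
```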

Disclaimer: This model card was written by the author of SimPool, [Bill Psomas](http://users.ntua.gr/psomasbill/).

## BibTeX entry and citation info

```
@misc{psomas2023simpool,
      title={Keep It SimPool: Who Said Supervised Transformers Suffer from Attention Deficit?},
      author={Bill Psomas and Ioannis Kakogeorgiou and Konstantinos Karantzalos and Yannis Avrithis},
      year={2023},
      eprint={2309.06891},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```