rwightman committed
Commit 24ac112
1 Parent(s): ef0fea2
Files changed (4)
  1. README.md +182 -0
  2. config.json +33 -0
  3. model.safetensors +3 -0
  4. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,182 @@
+ ---
+ tags:
+ - image-classification
+ - timm
+ library_name: timm
+ license: apache-2.0
+ datasets:
+ - imagenet-12k
+ ---
+ # Model card for vit_little_patch16_reg1_gap_256.sbb_in12k
+
+ A Vision Transformer (ViT) image classification model. This is a `timm`-specific variation of the architecture with register tokens and global average pooling.
+
+ A number of models at the lower end of the model-scale spectrum originate in `timm`:
+
+ | variant | width | mlp width (mult) | heads | depth | timm orig |
+ | ------- | ----- | ---------------- | ----- | ------------- | --------- |
+ | tiny | 192 | 768 (4) | 3 | 12 | n |
+ | wee | 256 | 1280 (5) | 4 | 14 | y |
+ | pwee | 256 | 1280 (5) | 4 | 16 (parallel) | y |
+ | small | 384 | 1536 (4) | 6 | 12 | n |
+ | little | 320 | 1792 (5.6) | 5 | 14 | y |
+ | medium | 512 | 2048 (4) | 8 | 12 | y |
+ | mediumd | 512 | 2048 (4) | 8 | 20 | y |
+ | betwixt | 640 | 2560 (4) | 10 | 12 | y |
+ | base | 768 | 3072 (4) | 12 | 12 | n |
+
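As a sanity check on the table, the parameter count of the `little` variant can be roughly estimated from its width, MLP width, and depth alone. This is a sketch, not timm's exact accounting: it assumes standard ViT blocks with biased linear layers, a learnable position embedding over the 16x16 patch grid, one register token (`reg1`), and the 11,821-class ImageNet-12k head.

```python
# Rough ViT parameter-count estimate for the 'little' variant
# (width 320, MLP 1792, depth 14, patch 16, 256x256 input, 11821 classes).
width, mlp, depth = 320, 1792, 14
patch, img, n_classes = 16, 256, 11821

n_patches = (img // patch) ** 2                 # 16 x 16 = 256 patch tokens
patch_embed = 3 * patch * patch * width + width # patch projection conv + bias
pos_embed = n_patches * width                   # learnable position embedding
reg_token = 1 * width                           # one register token (reg1)

ln = 2 * width                                  # layernorm weight + bias
attn = width * 3 * width + 3 * width            # qkv projection
attn += width * width + width                   # attention output projection
ffn = width * mlp + mlp + mlp * width + width   # fc1 + fc2
block = 2 * ln + attn + ffn                     # one transformer block

head = width * n_classes + n_classes            # final classifier
total = patch_embed + pos_embed + reg_token + depth * block + ln + head
print(f"~{total / 1e6:.1f}M params")
```

The estimate lands within rounding distance of the 26.0 M parameters listed under Model Details below.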
+ Trained on ImageNet-12k by Ross Wightman in `timm` using the recipe template described below.
+
+ Recipe details:
+ * Searching for better baselines. Influenced by Swin/DeiT/DeiT-III, but with increased weight decay and moderate (in12k) to high (in1k) augmentation. Layer decay is used for fine-tuning. Some runs used BCE and/or NAdamW instead of AdamW.
+ * See [train_hparams.yaml](./train_hparams.yaml) for the specifics of each model.
+
+
+ ## Model Details
+ - **Model Type:** Image classification / feature backbone
+ - **Model Stats:**
+   - Params (M): 26.0
+   - GMACs: 5.7
+   - Activations (M): 12.3
+   - Image size: 256 x 256
+ - **Papers:**
+   - Vision Transformers Need Registers: https://arxiv.org/abs/2309.16588
+   - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
+ - **Dataset:** ImageNet-12k
+ - **Original:** https://github.com/huggingface/pytorch-image-models
+
+ ## Model Usage
+ ### Image Classification
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+ import torch
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model('vit_little_patch16_reg1_gap_256.sbb_in12k', pretrained=True)
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
+ ```
+
+ ### Feature Map Extraction
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'vit_little_patch16_reg1_gap_256.sbb_in12k',
+     pretrained=True,
+     features_only=True,
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ for o in output:
+     # print shape of each feature map in output
+     # e.g.:
+     #  torch.Size([1, 320, 16, 16])
+     #  torch.Size([1, 320, 16, 16])
+     #  torch.Size([1, 320, 16, 16])
+     print(o.shape)
+ ```
+
+ ### Image Embeddings
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'vit_little_patch16_reg1_gap_256.sbb_in12k',
+     pretrained=True,
+     num_classes=0,  # remove classifier nn.Linear
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
+
+ # or equivalently (without needing to set num_classes=0)
+ output = model.forward_features(transforms(img).unsqueeze(0))
+ # output is unpooled, a (1, 257, 320) shaped tensor
+
+ output = model.forward_head(output, pre_logits=True)
+ # output is a (1, num_features) shaped tensor
+ ```
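The (1, 257, 320) unpooled shape follows directly from the model name: a 256x256 input with 16x16 patches yields 256 patch tokens, plus the single register token of the `reg1` variant, at the `little` embedding width of 320. A quick sketch of that arithmetic:

```python
# Token count for vit_little_patch16_reg1_gap_256: 16x16 patch grid + 1 register.
img_size, patch_size, num_regs, width = 256, 16, 1, 320

num_patches = (img_size // patch_size) ** 2   # 256 patch tokens
num_tokens = num_patches + num_regs           # 257 tokens in total
print((1, num_tokens, width))                 # (1, 257, 320)
```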
+
+ ## Model Comparison
+ | model | top1 | top5 | param_count | img_size |
+ | -------------------------------------------------- | ------ | ------ | ----------- | -------- |
+ | [vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 86.202 | 97.874 | 64.11 | 256 |
+ | [vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 85.418 | 97.480 | 60.4 | 256 |
+ | [vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 84.930 | 97.386 | 38.88 | 256 |
+ | [vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k) | 83.774 | 96.972 | 22.52 | 256 |
+ | [vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k) | 84.322 | 96.812 | 63.95 | 256 |
+ | [vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_rope_reg4_gap_256.sbb_in1k) | 83.906 | 96.684 | 60.23 | 256 |
+ | [vit_base_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_base_patch16_rope_reg1_gap_256.sbb_in1k) | 83.866 | 96.67 | 86.43 | 256 |
+ | [vit_medium_patch16_rope_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_rope_reg1_gap_256.sbb_in1k) | 83.81 | 96.824 | 38.74 | 256 |
+ | [vit_betwixt_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in1k) | 83.706 | 96.616 | 60.4 | 256 |
+ | [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 83.628 | 96.544 | 60.4 | 256 |
+ | [vit_medium_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in1k) | 83.47 | 96.622 | 38.88 | 256 |
+ | [vit_medium_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg1_gap_256.sbb_in1k) | 83.462 | 96.548 | 38.88 | 256 |
+ | [vit_little_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_little_patch16_reg4_gap_256.sbb_in1k) | 82.514 | 96.262 | 22.52 | 256 |
+ | [vit_wee_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_wee_patch16_reg1_gap_256.sbb_in1k) | 80.258 | 95.360 | 13.42 | 256 |
+ | [vit_pwee_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_pwee_patch16_reg1_gap_256.sbb_in1k) | 80.072 | 95.136 | 15.25 | 256 |
+
+ ## Citation
+ ```bibtex
+ @misc{rw2019timm,
+   author = {Ross Wightman},
+   title = {PyTorch Image Models},
+   year = {2019},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   doi = {10.5281/zenodo.4414861},
+   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
+ }
+ ```
+ ```bibtex
+ @article{darcet2023vision,
+   title={Vision Transformers Need Registers},
+   author={Darcet, Timoth{\'e}e and Oquab, Maxime and Mairal, Julien and Bojanowski, Piotr},
+   journal={arXiv preprint arXiv:2309.16588},
+   year={2023}
+ }
+ ```
+ ```bibtex
+ @article{dosovitskiy2020vit,
+   title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
+   author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
+   journal={ICLR},
+   year={2021}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "architecture": "vit_little_patch16_reg1_gap_256",
+   "num_classes": 11821,
+   "num_features": 320,
+   "global_pool": "avg",
+   "pretrained_cfg": {
+     "tag": "sbb_in12k",
+     "custom_load": false,
+     "input_size": [
+       3,
+       256,
+       256
+     ],
+     "fixed_input_size": true,
+     "interpolation": "bicubic",
+     "crop_pct": 0.95,
+     "crop_mode": "center",
+     "mean": [
+       0.5,
+       0.5,
+       0.5
+     ],
+     "std": [
+       0.5,
+       0.5,
+       0.5
+     ],
+     "num_classes": 11821,
+     "pool_size": null,
+     "first_conv": "patch_embed.proj",
+     "classifier": "head"
+   }
+ }
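The `pretrained_cfg` above determines the eval-time preprocessing: with `crop_pct` 0.95, a 256-pixel center crop implies first resizing the short side to floor(256 / 0.95) = 269, and a 0.5 mean/std maps pixel values from [0, 1] to [-1, 1]. A minimal plain-Python sketch of that arithmetic (the real pipeline is built by `timm.data.create_transform` from this config):

```python
import math

# values from pretrained_cfg above
crop_size, crop_pct = 256, 0.95
mean = std = 0.5

# short side is resized to crop_size / crop_pct before the center crop
resize_size = int(math.floor(crop_size / crop_pct))
print(resize_size)  # 269

# normalization with mean 0.5 / std 0.5 maps [0, 1] pixels to [-1, 1]
normalize = lambda x: (x - mean) / std
print(normalize(0.0), normalize(1.0))  # -1.0 1.0
```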
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67542fa64125e7e955f38296911b147b909c31baf93f71463adf51357c84214d
+ size 103972820
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd8797570a54f4db1c0213baf52aee57c5fce11b90edddb10f0ac0bbec3a4007
+ size 104028666
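Both weight files are committed as Git LFS pointers; the tensors themselves live in LFS storage. A pointer file is just three `key value` lines, so it can be read with a few lines of Python. A sketch, using the `model.safetensors` pointer above as input (`parse_lfs_pointer` is an illustrative helper, not a library function):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file (version / oid / size lines) into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    # 'oid' is prefixed with the hash algorithm, e.g. 'sha256:<hex digest>'
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:67542fa64125e7e955f38296911b147b909c31baf93f71463adf51357c84214d
size 103972820
"""
info = parse_lfs_pointer(pointer)
print(info["hash_algo"], info["size"])  # sha256 103972820
```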