---
license: apache-2.0
tags:
- image-classification
datasets:
- imagenet
---

# PoolFormer (S24 model)

PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).

## Model description

PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator: pooling.

Transformers have shown great potential in computer vision tasks. A common belief is that their attention-based token mixer module contributes most to their competence. However, recent works show that the attention-based module in transformers can be replaced by spatial MLPs and the resulting models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
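
To make this concrete, here is a minimal PyTorch sketch of the pooling token mixer as formulated in the paper: stride-1 average pooling mixes neighboring tokens, and the input is subtracted because the surrounding MetaFormer block adds it back through its residual connection. Treat this as an illustration rather than the exact library implementation:

```python
import torch
from torch import nn

class Pooling(nn.Module):
    """Pooling token mixer that PoolFormer uses in place of self-attention."""

    def __init__(self, pool_size: int = 3):
        super().__init__()
        # Stride-1 average pooling keeps the spatial resolution unchanged.
        self.pool = nn.AvgPool2d(
            pool_size, stride=1, padding=pool_size // 2, count_include_pad=False
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Subtract the input: the residual connection of the MetaFormer
        # block adds it back, so the mixer itself only spreads information.
        return self.pool(x) - x

# Example: mix tokens of a (batch, channels, height, width) feature map.
mixer = Pooling()
features = torch.randn(1, 64, 56, 56)
print(mixer(features).shape)  # torch.Size([1, 64, 56, 56])
```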

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
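
You can also query the Hub programmatically; this is a small sketch assuming a recent version of the `huggingface_hub` library:

```python
from huggingface_hub import list_models

# Search the Hugging Face Hub for PoolFormer checkpoints from the sail org.
for model_info in list_models(search="poolformer", author="sail"):
    print(model_info.id)
```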

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests

# Load an example image from the COCO 2017 validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = PoolFormerFeatureExtractor.from_pretrained("sail/poolformer_s24")
model = PoolFormerForImageClassification.from_pretrained("sail/poolformer_s24")

# Preprocess the image and run a forward pass
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# The model predicts one of the 1,000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

Currently, both the feature extractor and model support PyTorch.
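
As a small extension of the snippet above (not part of the original example), the logits can also be converted into probabilities to inspect the top-5 predictions:

```python
import torch

# Convert logits to probabilities and list the five most likely classes.
probs = logits.softmax(-1)
top5 = torch.topk(probs, k=5)
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{model.config.id2label[idx.item()]}: {p.item():.3f}")
```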

## Training data

The PoolFormer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1,000 classes.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
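
For illustration, a conventional ImageNet-style validation pipeline is sketched below. The resize size, crop fraction, and normalization statistics are assumptions here; the actual values are defined in the linked `train.py`, and `PoolFormerFeatureExtractor` already applies the correct preprocessing for you:

```python
from torchvision import transforms

# Hypothetical ImageNet-style validation preprocessing; the actual resize
# strategy, crop fraction, and mean/std are set in the linked train.py.
val_transform = transforms.Compose([
    transforms.Resize(256),      # assumed shorter-side resize
    transforms.CenterCrop(224),  # matches the 224x224 eval resolution
    transforms.ToTensor(),
    transforms.Normalize(        # standard ImageNet statistics (assumed)
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225],
    ),
])
```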

### Pretraining

The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.

## Evaluation results

| Model              | ImageNet top-1 accuracy | # params | URL                                            |
|--------------------|-------------------------|----------|------------------------------------------------|
| PoolFormer-S12     | 77.2                    | 12M      | https://huggingface.co/sail/poolformer_s12     |
| **PoolFormer-S24** | **80.3**                | **21M**  | **https://huggingface.co/sail/poolformer_s24** |
| PoolFormer-S36     | 81.4                    | 31M      | https://huggingface.co/sail/poolformer_s36     |
| PoolFormer-M36     | 82.1                    | 56M      | https://huggingface.co/sail/poolformer_m36     |
| PoolFormer-M48     | 82.5                    | 73M      | https://huggingface.co/sail/poolformer_m48     |

### BibTeX entry and citation info

```bibtex
@article{yu2021metaformer,
  title={MetaFormer is Actually What You Need for Vision},
  author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2111.11418},
  year={2021}
}
```