---
tags:
- vision
- image-classification
datasets:
- imagenet
metrics:
- accuracy
library_tag: MogaNet
license: apache-2.0
language:
- en
library_name: timm
pipeline_tag: image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
---

# Model card for moganet_xtiny_256_in1k

MogaNet is a new family of efficient ConvNets with favorable parameter-performance trade-offs; this checkpoint is trained on ImageNet-1k (1 million images, 1,000 classes). It was first introduced in the paper [MogaNet](https://arxiv.org/abs/2211.03295) and released in [Westlake/MogaNet](https://github.com/Westlake-AI/MogaNet) and [Westlake/openmixup](https://github.com/Westlake-AI/openmixup).

## Description

Since the recent success of Vision Transformers (ViTs), explorations toward ViT-style architectures have triggered the resurgence of ConvNets. In this work, we explore the representation ability of modern ConvNets from a novel view of multi-order game-theoretic interaction, which reflects inter-variable interaction effects w.r.t. contexts of different scales based on game theory. Within the modern ConvNet framework, we tailor two feature mixers with conceptually simple yet effective depthwise convolutions to facilitate middle-order information across the spatial and channel spaces, respectively. In this light, a new family of pure ConvNet architectures, dubbed MogaNet, is proposed, which shows excellent scalability and attains competitive results among state-of-the-art models with more efficient use of parameters on ImageNet and a variety of typical vision benchmarks, including COCO object detection, ADE20K semantic segmentation, 2D & 3D human pose estimation, and video prediction. Typically, MogaNet hits 80.0% and 87.8% top-1 accuracy with 5.2M and 181M parameters on ImageNet, outperforming ParC-Net-S and ConvNeXt-L while saving 59% FLOPs and 17M parameters.

![model image](https://user-images.githubusercontent.com/44519745/224821476-843a1814-1894-4fa7-b919-551f0a183856.jpg)
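As a rough illustration of the gated, depthwise-convolution spatial mixer idea described above, here is a minimal PyTorch sketch. The module and parameter names are made up for illustration and this is **not** the official MogaNet block (which uses multi-order depthwise convolutions and a separate channel mixer); see the repository for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedDepthwiseMixer(nn.Module):
    """Toy spatial mixer: a depthwise-conv context branch modulated by a 1x1 gate.

    A simplified sketch of the gated-aggregation idea only, not the official MogaNet block.
    """

    def __init__(self, dim: int, kernel_size: int = 5):
        super().__init__()
        self.gate = nn.Conv2d(dim, dim, kernel_size=1)        # 1x1 gating branch
        self.context = nn.Conv2d(dim, dim, kernel_size,
                                 padding=kernel_size // 2,
                                 groups=dim)                   # depthwise context branch
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)         # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Context features are re-weighted by the gate before projection.
        return self.proj(F.silu(self.gate(x)) * self.context(x))


x = torch.randn(1, 32, 56, 56)            # (batch, channels, height, width)
print(GatedDepthwiseMixer(32)(x).shape)   # torch.Size([1, 32, 56, 56])
```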

## Model Usage

Set up the repository before using the model:
```bash
git clone https://github.com/Westlake-AI/MogaNet
cd MogaNet
```
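The Python examples below assume they are run from inside the cloned `MogaNet` directory (so that `import models` resolves to the repository's model definitions) with `torch` and `timm` installed. A quick, illustrative sanity check:

```python
# Illustrative environment check: run from the MogaNet repository root so that
# the local `models` module (the MogaNet definitions used below) is importable.
import torch
import timm
import models  # local MogaNet model definitions from the cloned repository

print("torch:", torch.__version__, "| timm:", timm.__version__)
```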

### Image Classification
```python
from urllib.request import urlopen

from PIL import Image
import torch
import timm

import models  # local MogaNet model definitions from the cloned repository

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('moganet_xtiny_1k_sz256', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
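To turn the predicted class indices into human-readable labels, one common option (not part of the original card; the label file below is a community resource on the Hugging Face Hub) is:

```python
import json

from huggingface_hub import hf_hub_download

# Map ImageNet-1k class indices to label strings using the community label file.
label_path = hf_hub_download(repo_id="huggingface/label-files",
                             filename="imagenet-1k-id2label.json",
                             repo_type="dataset")
with open(label_path) as f:
    id2label = json.load(f)

for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f"{id2label[str(idx.item())]}: {prob.item():.2f}%")
```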

### Feature Map Extraction
```python
from urllib.request import urlopen

from PIL import Image
import timm

import models  # local MogaNet model definitions from the cloned repository

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'moganet_xtiny_1k_sz256',
    pretrained=True,
    fork_feat=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    print(o.shape)
```
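With `fork_feat=True` the model returns one feature map per stage. As a small illustrative follow-up (not in the original card), each map can be pooled into a per-stage embedding for downstream use:

```python
import torch.nn.functional as F

# Illustrative: global-average-pool each stage's feature map into a (batch, channels) embedding.
embeddings = [F.adaptive_avg_pool2d(o, 1).flatten(1) for o in output]
for i, e in enumerate(embeddings):
    print(f"stage {i}: {tuple(e.shape)}")
```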

## Model Comparison

| Model | Resolution | Params (M) | FLOPs (G) | Top-1 / Top-5 (%) | Download |
|---|:---:|:---:|:---:|:---:|:---:|
| moganet_xtiny_224_in1k | 224x224 | 2.97 | 0.80 | 76.5 / 93.4 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_xtiny_sz224_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_xtiny_224_in1k) |
| moganet_xtiny_256_in1k | 256x256 | 2.97 | 1.04 | 77.2 / 93.8 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_xtiny_sz256_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_xtiny_256_in1k) |
| moganet_tiny_224_in1k | 224x224 | 5.20 | 1.10 | 79.0 / 94.6 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_tiny_sz224_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_tiny_224_in1k) |
| moganet_tiny_256_in1k | 256x256 | 5.20 | 1.44 | 79.6 / 94.9 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_tiny_sz256_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_tiny_256_in1k) |
| moganet_small_224_in1k | 224x224 | 25.3 | 4.97 | 83.4 / 96.9 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_small_sz224_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_small_224_in1k) |
| moganet_base_224_in1k | 224x224 | 43.9 | 9.93 | 84.3 / 97.0 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_base_sz224_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_base_224_in1k) |
| moganet_large_224_in1k | 224x224 | 82.5 | 15.9 | 84.7 / 97.1 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_large_sz224_8xbs64_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_large_224_in1k) |
| moganet_xlarge_224_in1k | 224x224 | 180.8 | 34.5 | 85.1 / 97.4 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_xlarge_sz224_8xbs64_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_xlarge_224_in1k) |

## Citation
```bibtex
@article{Li2022MogaNet,
  title={Efficient Multi-order Gated Aggregation Network},
  author={Siyuan Li and Zedong Wang and Zicheng Liu and Cheng Tan and Haitao Lin and Di Wu and Zhiyuan Chen and Jiangbin Zheng and Stan Z. Li},
  journal={ArXiv},
  year={2022},
  volume={abs/2211.03295}
}
```