|
# ECA-ResNet |
|
|
|
An **ECA-ResNet** is a variant of a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit, based on [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block), that reduces model complexity while avoiding dimensionality reduction: it captures local cross-channel interaction with a lightweight 1D convolution instead of the squeeze-and-excitation bottleneck.
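
The sketch below shows the Efficient Channel Attention idea in PyTorch, for illustration only; it is not timm's implementation, and the fixed `kernel_size=3` stands in for the paper's adaptively chosen kernel size.

```py
>>> import torch
>>> import torch.nn as nn

>>> class ECAModule(nn.Module):
...     """Illustrative Efficient Channel Attention block (not timm's code)."""
...     def __init__(self, kernel_size=3):
...         super().__init__()
...         # A 1D convolution over the pooled channel descriptor replaces the
...         # squeeze-and-excitation bottleneck, so channels are never projected down.
...         self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
...         self.gate = nn.Sigmoid()
...     def forward(self, x):
...         y = x.mean(dim=(2, 3))                      # squeeze: (N, C, H, W) -> (N, C)
...         y = self.gate(self.conv(y.unsqueeze(1)))    # local cross-channel interaction
...         return x * y.transpose(1, 2).unsqueeze(-1)  # rescale the input channel-wise

>>> attn = ECAModule()
>>> print(attn(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```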
|
|
|
## How do I use this model on an image? |
|
|
|
To load a pretrained model: |
|
|
|
```py |
|
>>> import timm |
|
>>> model = timm.create_model('ecaresnet101d', pretrained=True) |
|
>>> model.eval() |
|
``` |
|
|
|
To load and preprocess the image: |
|
|
|
```py |
|
>>> import urllib.request
|
>>> from PIL import Image |
|
>>> from timm.data import resolve_data_config |
|
>>> from timm.data.transforms_factory import create_transform |
|
|
|
>>> config = resolve_data_config({}, model=model) |
|
>>> transform = create_transform(**config) |
|
|
|
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") |
|
>>> urllib.request.urlretrieve(url, filename) |
|
>>> img = Image.open(filename).convert('RGB') |
|
>>> tensor = transform(img).unsqueeze(0) |
|
``` |
|
|
|
To get the model predictions: |
|
|
|
```py |
|
>>> import torch |
|
>>> with torch.no_grad(): |
|
...     out = model(tensor)
|
>>> probabilities = torch.nn.functional.softmax(out[0], dim=0) |
|
>>> print(probabilities.shape) |
|
>>> # prints torch.Size([1000])
|
``` |
|
|
|
To get the top-5 prediction class names:
|
|
|
```py |
|
>>> # Get ImageNet class mappings
|
>>> url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt") |
|
>>> urllib.request.urlretrieve(url, filename) |
|
>>> with open("imagenet_classes.txt", "r") as f: |
|
...     categories = [s.strip() for s in f.readlines()]
|
|
|
>>> # Print the top categories for the image
|
>>> top5_prob, top5_catid = torch.topk(probabilities, 5) |
|
>>> for i in range(top5_prob.size(0)): |
|
...     print(categories[top5_catid[i]], top5_prob[i].item())
|
>>> # prints class names and probabilities
|
``` |
|
|
|
Replace the model name with the variant you want to use, e.g. `ecaresnet101d`. You can find the IDs in the model summaries at the top of this page. |
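
For example, the ECA-ResNet variant names registered in your installed version of timm can be listed with `timm.list_models` (the exact set of names may vary by timm version):

```py
>>> import timm
>>> # Glob-filter registered model names; pretrained=True keeps only variants with weights
>>> timm.list_models('ecaresnet*', pretrained=True)
```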
|
|
|
To extract image features with this model, follow the [timm feature extraction examples](../feature_extraction); just change the name of the model you want to use.
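
As a minimal sketch of that pattern, you can create the model with `features_only=True` so it returns intermediate feature maps instead of classification logits:

```py
>>> import torch
>>> import timm

>>> # Build the model as a feature backbone
>>> feature_extractor = timm.create_model('ecaresnet101d', pretrained=True, features_only=True)
>>> with torch.no_grad():
...     feature_maps = feature_extractor(torch.randn(1, 3, 224, 224))
>>> for fm in feature_maps:
...     print(fm.shape)
```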
|
|
|
## How do I finetune this model? |
|
|
|
You can finetune any of the pre-trained models just by changing the classifier (the last layer). |
|
|
|
```py |
|
>>> model = timm.create_model('ecaresnet101d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES) |
|
``` |
|
To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
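
As a rough illustration only (not timm's training script), a minimal finetuning loop might look like the following; the dummy tensors, `NUM_FINETUNE_CLASSES`, and the optimizer settings are placeholder assumptions to replace with your own data and hyperparameters.

```py
>>> import torch
>>> import timm
>>> from torch.utils.data import DataLoader, TensorDataset

>>> # Dummy data standing in for your own dataset
>>> NUM_FINETUNE_CLASSES = 10
>>> dataset = TensorDataset(torch.randn(8, 3, 224, 224),
...                         torch.randint(0, NUM_FINETUNE_CLASSES, (8,)))
>>> train_loader = DataLoader(dataset, batch_size=4)

>>> model = timm.create_model('ecaresnet101d', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
>>> criterion = torch.nn.CrossEntropyLoss()

>>> model.train()
>>> for images, labels in train_loader:
...     optimizer.zero_grad()
...     loss = criterion(model(images), labels)
...     loss.backward()
...     optimizer.step()
```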
|
|
|
## How do I train this model? |
|
|
|
You can follow the [timm recipe scripts](../scripts) for training a new model from scratch.
|
|
|
## Citation |
|
|
|
```BibTeX |
|
@misc{wang2020ecanet, |
|
title={ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks}, |
|
author={Qilong Wang and Banggu Wu and Pengfei Zhu and Peihua Li and Wangmeng Zuo and Qinghua Hu}, |
|
year={2020}, |
|
eprint={1910.03151}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CV} |
|
} |
|
``` |
|
|
|
<!-- |
|
Type: model-index |
|
Collections: |
|
- Name: ECAResNet |
|
Paper: |
|
Title: 'ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks' |
|
URL: https://paperswithcode.com/paper/eca-net-efficient-channel-attention-for-deep |
|
Models: |
|
- Name: ecaresnet101d |
|
In Collection: ECAResNet |
|
Metadata: |
|
FLOPs: 10377193728 |
|
Parameters: 44570000 |
|
File Size: 178815067 |
|
Architecture: |
|
- 1x1 Convolution |
|
- Batch Normalization |
|
- Bottleneck Residual Block |
|
- Convolution |
|
- Efficient Channel Attention |
|
- Global Average Pooling |
|
- Max Pooling |
|
- ReLU |
|
- Residual Block |
|
- Residual Connection |
|
- Softmax |
|
- Squeeze-and-Excitation Block |
|
Tasks: |
|
- Image Classification |
|
Training Techniques: |
|
- SGD with Momentum |
|
- Weight Decay |
|
Training Data: |
|
- ImageNet |
|
Training Resources: 4x RTX 2080Ti GPUs |
|
ID: ecaresnet101d |
|
LR: 0.1 |
|
Epochs: 100 |
|
Layers: 101 |
|
Crop Pct: '0.875' |
|
Batch Size: 256 |
|
Image Size: '224' |
|
Weight Decay: 0.0001 |
|
Interpolation: bicubic |
|
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1087 |
|
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet101D_281c5844.pth |
|
Results: |
|
- Task: Image Classification |
|
Dataset: ImageNet |
|
Metrics: |
|
Top 1 Accuracy: 82.18% |
|
Top 5 Accuracy: 96.06% |
|
- Name: ecaresnet101d_pruned |
|
In Collection: ECAResNet |
|
Metadata: |
|
FLOPs: 4463972081 |
|
Parameters: 24880000 |
|
File Size: 99852736 |
|
Architecture: |
|
- 1x1 Convolution |
|
- Batch Normalization |
|
- Bottleneck Residual Block |
|
- Convolution |
|
- Efficient Channel Attention |
|
- Global Average Pooling |
|
- Max Pooling |
|
- ReLU |
|
- Residual Block |
|
- Residual Connection |
|
- Softmax |
|
- Squeeze-and-Excitation Block |
|
Tasks: |
|
- Image Classification |
|
Training Techniques: |
|
- SGD with Momentum |
|
- Weight Decay |
|
Training Data: |
|
- ImageNet |
|
ID: ecaresnet101d_pruned |
|
Layers: 101 |
|
Crop Pct: '0.875' |
|
Image Size: '224' |
|
Interpolation: bicubic |
|
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1097 |
|
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45610/outputs/ECAResNet101D_P_75a3370e.pth |
|
Results: |
|
- Task: Image Classification |
|
Dataset: ImageNet |
|
Metrics: |
|
Top 1 Accuracy: 80.82% |
|
Top 5 Accuracy: 95.64% |
|
- Name: ecaresnet50d |
|
In Collection: ECAResNet |
|
Metadata: |
|
FLOPs: 5591090432 |
|
Parameters: 25580000 |
|
File Size: 102579290 |
|
Architecture: |
|
- 1x1 Convolution |
|
- Batch Normalization |
|
- Bottleneck Residual Block |
|
- Convolution |
|
- Efficient Channel Attention |
|
- Global Average Pooling |
|
- Max Pooling |
|
- ReLU |
|
- Residual Block |
|
- Residual Connection |
|
- Softmax |
|
- Squeeze-and-Excitation Block |
|
Tasks: |
|
- Image Classification |
|
Training Techniques: |
|
- SGD with Momentum |
|
- Weight Decay |
|
Training Data: |
|
- ImageNet |
|
Training Resources: 4x RTX 2080Ti GPUs |
|
ID: ecaresnet50d |
|
LR: 0.1 |
|
Epochs: 100 |
|
Layers: 50 |
|
Crop Pct: '0.875' |
|
Batch Size: 256 |
|
Image Size: '224' |
|
Weight Decay: 0.0001 |
|
Interpolation: bicubic |
|
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1045 |
|
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet50D_833caf58.pth |
|
Results: |
|
- Task: Image Classification |
|
Dataset: ImageNet |
|
Metrics: |
|
Top 1 Accuracy: 80.61% |
|
Top 5 Accuracy: 95.31% |
|
- Name: ecaresnet50d_pruned |
|
In Collection: ECAResNet |
|
Metadata: |
|
FLOPs: 3250730657 |
|
Parameters: 19940000 |
|
File Size: 79990436 |
|
Architecture: |
|
- 1x1 Convolution |
|
- Batch Normalization |
|
- Bottleneck Residual Block |
|
- Convolution |
|
- Efficient Channel Attention |
|
- Global Average Pooling |
|
- Max Pooling |
|
- ReLU |
|
- Residual Block |
|
- Residual Connection |
|
- Softmax |
|
- Squeeze-and-Excitation Block |
|
Tasks: |
|
- Image Classification |
|
Training Techniques: |
|
- SGD with Momentum |
|
- Weight Decay |
|
Training Data: |
|
- ImageNet |
|
ID: ecaresnet50d_pruned |
|
Layers: 50 |
|
Crop Pct: '0.875' |
|
Image Size: '224' |
|
Interpolation: bicubic |
|
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1055 |
|
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45899/outputs/ECAResNet50D_P_9c67f710.pth |
|
Results: |
|
- Task: Image Classification |
|
Dataset: ImageNet |
|
Metrics: |
|
Top 1 Accuracy: 79.71% |
|
Top 5 Accuracy: 94.88% |
|
- Name: ecaresnetlight |
|
In Collection: ECAResNet |
|
Metadata: |
|
FLOPs: 5276118784 |
|
Parameters: 30160000 |
|
File Size: 120956612 |
|
Architecture: |
|
- 1x1 Convolution |
|
- Batch Normalization |
|
- Bottleneck Residual Block |
|
- Convolution |
|
- Efficient Channel Attention |
|
- Global Average Pooling |
|
- Max Pooling |
|
- ReLU |
|
- Residual Block |
|
- Residual Connection |
|
- Softmax |
|
- Squeeze-and-Excitation Block |
|
Tasks: |
|
- Image Classification |
|
Training Techniques: |
|
- SGD with Momentum |
|
- Weight Decay |
|
Training Data: |
|
- ImageNet |
|
ID: ecaresnetlight |
|
Crop Pct: '0.875' |
|
Image Size: '224' |
|
Interpolation: bicubic |
|
Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1077 |
|
Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNetLight_4f34b35b.pth |
|
Results: |
|
- Task: Image Classification |
|
Dataset: ImageNet |
|
Metrics: |
|
Top 1 Accuracy: 80.46% |
|
Top 5 Accuracy: 95.25% |
|
--> |