Update README.md
---
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
---

# Model card for hb_former_s18

The model hb_former_s18 is part of the HyenaPixel model family proposed in the paper ["HyenaPixel: Global Image Context with Convolutions"](https://arxiv.org/abs/2402.19305).
HyenaPixel uses large convolutions as an attention replacement by extending Hyena ([Paper](https://arxiv.org/abs/2302.10866) and [GitHub](https://github.com/HazyResearch/safari/)) to support bidirectional and two-dimensional input.
The operator is integrated into the MetaFormer ([Paper](https://arxiv.org/abs/2210.13452) and [GitHub](https://github.com/sail-sg/metaformer)) framework.

The official PyTorch implementation of HyenaPixel can be found on [GitHub](https://github.com/spravil/HyenaPixel).
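
To give a rough intuition for a convolutional token mixer standing in for attention, here is a minimal, purely illustrative sketch: a large depthwise 2D convolution placed where attention would sit in a MetaFormer-style block. This is not the HyenaPixel operator itself (which builds on Hyena's long convolutions); the class name, kernel size, and shapes below are assumptions made for the example.

```python
import torch
import torch.nn as nn


class LargeKernelTokenMixer(nn.Module):
    """Hypothetical stand-in for attention: a large depthwise 2D convolution.

    Not the HyenaPixel operator; it only illustrates mixing spatial context
    with a wide convolution inside a MetaFormer-style block.
    """

    def __init__(self, dim: int, kernel_size: int = 31):
        super().__init__()
        # Depthwise convolution; "same" padding keeps the feature-map size,
        # so the mixer can slot in where attention would normally sit.
        self.conv = nn.Conv2d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, height, width).
        return self.conv(x)


# Shape check on a dummy feature map.
mixer = LargeKernelTokenMixer(dim=64)
print(mixer(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```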

## Models

| Model              | Resolution | Params | Top-1 Acc (%) | Download |
| :----------------- | :--------: | :----: | :-----------: | :------: |
| hpx_former_s18     | 224        | 29M    | 83.2          | [HuggingFace](https://huggingface.co/Spravil/hpx_former_s18.westai_in1k) |
| hpx_former_s18_384 | 384        | 29M    | 84.7          | [HuggingFace](https://huggingface.co/Spravil/hpx_former_s18.westai_in1k_384) |
| hb_former_s18      | 224        | 28M    | 83.5          | [HuggingFace](https://huggingface.co/Spravil/hb_former_s18.westai_in1k) |
| c_hpx_former_s18   | 224        | 28M    | 83.0          | [HuggingFace](https://huggingface.co/Spravil/c_hpx_former_s18.westai_in1k) |
| hpx_a_former_s18   | 224        | 28M    | 83.6          | [HuggingFace](https://huggingface.co/Spravil/hpx_a_former_s18.westai_in1k) |
| hb_a_former_s18    | 224        | 27M    | 83.2          | [HuggingFace](https://huggingface.co/Spravil/hb_a_former_s18.westai_in1k) |
| hpx_former_b36     | 224        | 111M   | 84.9          | [HuggingFace](https://huggingface.co/Spravil/hpx_former_b36.westai_in1k) |
| hb_former_b36      | 224        | 102M   | 85.2          | [HuggingFace](https://huggingface.co/Spravil/hb_former_b36.westai_in1k) |

## Usage

Install the HyenaPixel package:

```bash
pip install git+https://github.com/spravil/HyenaPixel.git
```

Then create a pretrained model with timm:

```python
import timm

# Importing hyenapixel.models registers the HyenaPixel architectures with timm.
import hyenapixel.models

model = timm.create_model("hb_former_s18", pretrained=True)
```
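
For a quick check of the pretrained weights, the following is a minimal inference sketch using timm's standard preprocessing helpers; the image path `example.jpg` and the top-5 printout are placeholders for illustration.

```python
import timm
import torch
from PIL import Image

import hyenapixel.models  # make the HyenaPixel model definitions available to timm

# Load the pretrained classifier and build the matching preprocessing transform.
model = timm.create_model("hb_former_s18", pretrained=True)
model.eval()
data_config = timm.data.resolve_model_data_config(model)
transform = timm.data.create_transform(**data_config, is_training=False)

# "example.jpg" is a placeholder path; substitute any RGB image.
image = Image.open("example.jpg").convert("RGB")
with torch.no_grad():
    logits = model(transform(image).unsqueeze(0))

# Report the five highest-scoring ImageNet-1k classes.
top5 = logits.softmax(dim=-1).topk(5)
print(top5.indices.tolist(), top5.values.tolist())
```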

## BibTeX

```bibtex
@article{spravil2024hyenapixel,
  title={HyenaPixel: Global Image Context with Convolutions},
  author={Julian Spravil and Sebastian Houben and Sven Behnke},
  journal={arXiv preprint arXiv:2402.19305},
  year={2024},
}
```