---
license: other
license_name: nvclv1
license_link: LICENSE
datasets:
- ILSVRC/imagenet-1k
pipeline_tag: image-classification
---
[**MambaVision: A Hybrid Mamba-Transformer Vision Backbone**](https://arxiv.org/abs/2407.08083).
### Model Overview
We introduce a novel mixer block that adds a symmetric path without SSM to enhance the modeling of global context. MambaVision has a hierarchical architecture that employs both self-attention and mixer blocks.
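For intuition only, the following is a minimal, self-contained sketch of the hybrid mixer idea: the projected input is split between an SSM branch and a symmetric branch without SSM, and the two halves are concatenated before the output projection. All names, layer choices, and the channel split are illustrative assumptions; see the paper for the exact block design.
```Python
import torch
import torch.nn as nn

class HybridMixerSketch(nn.Module):
    """Illustrative sketch: one branch mixes the sequence through an
    (externally supplied) SSM; the symmetric branch uses conv + activation
    without SSM. The two halves are concatenated and projected out."""
    def __init__(self, dim: int, ssm: nn.Module):
        super().__init__()
        half = dim // 2
        self.in_proj = nn.Linear(dim, dim)
        self.conv_ssm = nn.Conv1d(half, half, kernel_size=3, padding=1, groups=half)
        self.ssm = ssm  # placeholder for a state-space sequence mixer
        self.conv_sym = nn.Conv1d(half, half, kernel_size=3, padding=1, groups=half)
        self.act = nn.SiLU()
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, C)
        x = self.in_proj(x)
        a, b = x.chunk(2, dim=-1)  # split channels across the two paths
        a = self.act(self.conv_ssm(a.transpose(1, 2)).transpose(1, 2))
        a = self.ssm(a)            # global sequence mixing via the SSM
        b = self.act(self.conv_sym(b.transpose(1, 2)).transpose(1, 2))
        return self.out_proj(torch.cat([a, b], dim=-1))

# smoke test with an identity stand-in for the SSM
block = HybridMixerSketch(dim=64, ssm=nn.Identity())
print(block(torch.randn(2, 196, 64)).shape)  # torch.Size([2, 196, 64])
```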
### Model Performance
MambaVision demonstrates strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy versus throughput.
<p align="center">
<img src="https://github.com/NVlabs/MambaVision/assets/26806394/79dcf841-3966-4b77-883d-76cd5e1d4320" width=42% height=42%
class="center">
</p>
### Model Usage
You must first log in to Hugging Face to pull the model:
```Bash
huggingface-cli login
```
We highly recommend installing the requirements for MambaVision by running the following:
```Bash
pip install mambavision
```
For each model, we offer two variants, one for image classification and one for feature extraction, each of which can be imported with a single line of code. The classification model can be imported as follows (a feature-extraction sketch follows it):
```Python
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)
```
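The feature-extraction variant can be loaded analogously. Based on the convention used on the companion MambaVision model cards, it is exposed through `AutoModel`; treat the exact output format as an assumption and check the remote code:
```Python
from transformers import AutoModel

# assumed interface: on the companion MambaVision cards the feature
# extractor returns pooled features plus per-stage feature maps
feature_model = AutoModel.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)
```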
The classification model outputs logits when an image is passed. If labels are additionally provided, the cross-entropy loss between the predictions and the labels is computed.
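As a hedged sketch of the loss path (assuming the remote code follows the usual `transformers` convention of a `labels` keyword argument):
```Python
import torch

# hypothetical sketch: the 'labels' keyword is an assumption based on the
# standard transformers convention; check the remote code for the exact name
dummy_image = torch.randn(1, 3, 224, 224)   # random stand-in input
labels = torch.tensor([0])                  # an ImageNet-1K class index
outputs = model(dummy_image, labels=labels)
print(outputs['logits'].shape)              # class logits, e.g. (1, 1000)
print(outputs['loss'])                      # cross-entropy vs. the label
```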
The following is a minimal end-to-end example of how to use the model for inference:
```Python
from transformers import AutoModelForImageClassification
from PIL import Image
import requests
import torch
import timm
# load the pretrained MambaVision classification model
model = AutoModelForImageClassification.from_pretrained("nvidia/MambaVision-T-1K", trust_remote_code=True)
# eval mode for inference
model.eval()
# prepare image for the model
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# build the default eval transform (resize, center-crop, ImageNet normalization)
transforms = timm.data.create_transform((3, 224, 224))
image = transforms(image).unsqueeze(0)  # add a batch dimension
# put both model and image on cuda
model = model.cuda()
image = image.cuda()
# forward pass (inference only, so no gradients are needed)
with torch.no_grad():
    outputs = model(image)
# extract the predicted probabilities by applying softmax over the class dimension
probabilities = torch.nn.functional.softmax(outputs['logits'], dim=-1)
# find the top-5 predicted class indices and their probabilities
values, indices = torch.topk(probabilities, 5)
```
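To map the top-5 indices to human-readable class names, one option is a plain-text ImageNet-1K label list; the file used below is a commonly used community-maintained list, not part of this repository:
```Python
# download a 1000-line ImageNet-1K class-name list (community-maintained)
labels_url = 'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
categories = requests.get(labels_url).text.strip().split('\n')
# probabilities has shape (1, 1000), so index the first batch element
for prob, idx in zip(values[0], indices[0]):
    print(f'{categories[idx.item()]}: {prob.item():.4f}')
```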
### License
[NVIDIA Source Code License-NC](https://huggingface.co/nvidia/MambaVision-T-1K/blob/main/LICENSE)