---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper *Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision* and first released in this repository.

Disclaimer: The team releasing RegNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The authors trained RegNet models in a self-supervised fashion on a billion uncurated Instagram images. This checkpoint was then fine-tuned on ImageNet.
## Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-320-seer-in1k")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-320-seer-in1k")

>>> inputs = feature_extractor(image, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
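If you want to see more than the single best class, you can rank the logits and take the top-k predictions. A minimal sketch, using a hypothetical logits tensor in place of `model(**inputs).logits` so it runs without downloading the model:

```python
import torch

# Hypothetical logits standing in for model(**inputs).logits
# (shape: batch_size x 1000 ImageNet classes).
logits = torch.zeros(1, 1000)
logits[0, 281] = 5.0  # index 281 is 'tabby, tabby cat' in ImageNet-1k
logits[0, 282] = 3.0  # index 282 is 'tiger cat'

# Convert logits to probabilities and take the five highest-scoring classes.
probs = torch.softmax(logits, dim=-1)
top5 = torch.topk(probs, k=5, dim=-1)

for score, idx in zip(top5.values[0], top5.indices[0]):
    # With the real model, map idx back to a label via model.config.id2label
    print(idx.item(), round(score.item(), 4))
```

With the real checkpoint, replace the dummy tensor with the logits from the forward pass and use `model.config.id2label[idx.item()]` to recover human-readable labels.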
For more code examples, see the documentation.