zuppif committed
Commit b13a1f6 · 1 Parent(s): 8a7d27c

Upload README.md

Files changed (1)
  1. README.md +62 -0
README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ license: apache-2.0
+ tags:
+ - vision
+ - image-classification
+
+ datasets:
+ - imagenet-21k
+ - imagenet-1k
+
+ widget:
+ - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
+
+ ---
+
+ # ConvNext
+
+ ConvNext model trained on ImageNet-21k. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).
+
+ Disclaimer: The team releasing ConvNext did not write a model card for this model, so this model card has been written by the Hugging Face team.
+
+ ## Model description
+
+ ConvNext is a pure convolutional model (ConvNet) inspired by the design of vision Transformers. Starting from a standard ResNet, the authors gradually "modernize" its design (a patchify stem, depthwise convolutions with larger 7x7 kernels, inverted bottleneck blocks, fewer normalization and activation layers, GELU and LayerNorm), arriving at a family of ConvNets that compete with the Swin Transformer on ImageNet classification.
+
+ ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png)
+
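+ As a rough sketch (not part of the original card), the stage layout described above can be inspected through the `transformers` configuration class; the default `ConvNextConfig` values correspond to the ConvNeXt-T variant:
+
+ ```python
+ >>> from transformers import ConvNextConfig, ConvNextModel
+
+ >>> # Default config: number of blocks and channel width per stage
+ >>> configuration = ConvNextConfig()
+ >>> print(configuration.depths, configuration.hidden_sizes)
+ [3, 3, 9, 3] [96, 192, 384, 768]
+
+ >>> # Randomly initialised backbone built from this configuration (no pretrained weights)
+ >>> model = ConvNextModel(configuration)
+ ```
+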
+ ## Intended uses & limitations
+
+ You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for
+ fine-tuned versions on a task that interests you.
+
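+ As a small illustration (not part of the original card), the same search can be done programmatically with the `huggingface_hub` client, assuming a recent version is installed:
+
+ ```python
+ >>> from itertools import islice
+ >>> from huggingface_hub import list_models
+
+ >>> # List a handful of ConvNext checkpoints from the Hub (results will vary over time)
+ >>> for info in islice(list_models(search="convnext"), 5):
+ ...     print(info.id)
+ ```
+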
+ ### How to use
+
+ Here is how to use this model:
+ ```python
+ >>> from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification
+ >>> import torch
+ >>> from datasets import load_dataset
+
+ >>> dataset = load_dataset("huggingface/cats-image")
+ >>> image = dataset["test"]["image"][0]
+
+ >>> feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-tiny-224")
+ >>> model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")
+
+ >>> inputs = feature_extractor(image, return_tensors="pt")
+
+ >>> with torch.no_grad():
+ ...     logits = model(**inputs).logits
+
+ >>> # model predicts one of the 1000 ImageNet classes
+ >>> predicted_label = logits.argmax(-1).item()
+ >>> print(model.config.id2label[predicted_label])
+ 'tabby, tabby cat'
+ ```
+
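+ Purely as an optional follow-up (not in the original card), the `logits` from the snippet above can be turned into class probabilities to inspect the top-5 predictions:
+
+ ```python
+ >>> probabilities = logits.softmax(-1)[0]
+
+ >>> # Show the five highest-scoring ImageNet classes with their probabilities
+ >>> top5 = probabilities.topk(5)
+ >>> for score, idx in zip(top5.values, top5.indices):
+ ...     print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
+ ```
+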
+
+ For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext).