nprasad24 committed
Commit 6d12bcf · verified · 1 Parent(s): 4820864

Update README.md

Files changed (1): README.md (+7 -1)
README.md CHANGED
@@ -26,7 +26,13 @@ It achieves the following results on the evaluation set:

  ## Model description

- More information needed
+ The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
+
+ Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is prepended to the sequence for use in classification tasks, and absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder.
+
+ Note that this model does not provide any fine-tuned heads, as these were zeroed by the Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
+
+ Through pre-training, the model learns an inner representation of images that can then be used to extract features for downstream tasks: given a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. The linear layer is typically placed on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.

  ## Intended uses & limitations
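The added description outlines a downstream-classification pattern: read off the last hidden state of the [CLS] token from the pre-trained encoder and train a linear layer on top of it. A minimal sketch of that pattern with the `transformers` library follows; the checkpoint name `google/vit-base-patch16-224-in21k`, the image path, and the 3-class head are illustrative assumptions, not part of this commit.

```python
# A minimal sketch (not part of this commit) of the pattern the description
# outlines: take the pre-trained ViT encoder, read the last hidden state of
# the [CLS] token, and put a linear classification layer on top of it.
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

checkpoint = "google/vit-base-patch16-224-in21k"  # assumed ImageNet-21k base checkpoint
processor = ViTImageProcessor.from_pretrained(checkpoint)
encoder = ViTModel.from_pretrained(checkpoint)

# Hypothetical 3-class task; this head is trained from scratch downstream.
classifier = torch.nn.Linear(encoder.config.hidden_size, 3)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# For a 224x224 image and 16x16 patches, the sequence holds
# (224 / 16)**2 = 196 patch embeddings plus [CLS], i.e. 197 positions;
# [CLS] sits at index 0 and summarizes the whole image.
cls_embedding = outputs.last_hidden_state[:, 0]
logits = classifier(cls_embedding)
```

To train the head, one would feed `logits` into a standard cross-entropy loss, optionally keeping the encoder frozen, which matches the feature-extraction usage the description refers to.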