iammartian0 committed on
Commit 9b342e6 · 1 Parent(s): ab8aeb8

Update README.md

Files changed (1)
  1. README.md +71 -1
README.md CHANGED
@@ -7,4 +7,74 @@ metrics:
pipeline_tag: image-classification
tags:
- climate
---

## Model description

This is a Transformers-based image classification model, built using transfer learning.
The pretrained model is the [Vision Transformer](https://huggingface.co/google/vit-base-patch16-224) trained on ImageNet-21k.
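
As a rough sketch of this transfer-learning setup (the label names below are placeholders, not the model's actual classes), the pretrained ViT backbone can be loaded with a fresh classification head:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder label set for illustration only; the real labels come from the dataset folders.
labels = ["grassland", "maize", "other"]

# The image processor carries the resize/normalisation settings used at inference time.
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")

model = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # replace the ImageNet head with a new, smaller one
)
```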

## Datasets

The dataset is downloaded from the [Agri-Hub/Space2Ground](https://github.com/Agri-Hub/Space2Ground/tree/main) repository.
The Street-level image patches folder was used for this model; it contains cropped vegetation
parts of Mapillary street-level images. Further details are available in the linked repository.

### How to use

You can use this model directly with the `pipeline` class from the Hugging Face `transformers` library:

```python
>>> from transformers import pipeline
>>> classifier = pipeline("image-classification", model="iammartian0/vegetation_classification_model")
>>> classifier(image)  # image: a PIL.Image, local file path, or URL of a street-level patch
```
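
The pipeline returns a list of `{"label", "score"}` dictionaries, one per predicted class, sorted by descending score.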

## Training procedure

### Preprocessing

Labels are assigned based on the parent folder names.
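
A minimal sketch of such folder-based labelling using the `datasets` library's `imagefolder` builder; the `data/train` path and layout are assumptions for illustration, not the repository's actual structure:

```python
from datasets import load_dataset

# "imagefolder" derives each example's label from its parent folder name,
# e.g. data/train/<class_name>/<image>.jpg
dataset = load_dataset("imagefolder", data_dir="data/train")
labels = dataset["train"].features["label"].names
```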

### Image Transformations

`RandomResizedCrop` from `torchvision.transforms` is applied to all training images.
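
A sketch of how that transform might be wired in; the crop size and normalisation statistics are assumed to match the ViT image processor (224x224, mean/std 0.5) and are not taken from the original training code:

```python
from torchvision.transforms import Compose, Normalize, RandomResizedCrop, ToTensor

train_transforms = Compose([
    RandomResizedCrop(224),  # random crop plus resize, as stated above
    ToTensor(),
    Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

def preprocess(batch):
    # Apply on the fly, e.g. via dataset["train"].with_transform(preprocess)
    batch["pixel_values"] = [train_transforms(img.convert("RGB")) for img in batch["image"]]
    return batch
```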

### Finetuning

The model is fine-tuned on the dataset for four epochs.
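
An illustrative sketch of such a fine-tuning run with the `Trainer` API, reusing `model`, `dataset`, and `preprocess` from the sketches above; apart from the four epochs stated here, the hyperparameters are assumptions rather than the values actually used:

```python
import torch
from transformers import Trainer, TrainingArguments

def collate_fn(examples):
    # Stack transformed images and gather integer labels into tensors.
    pixel_values = torch.stack([ex["pixel_values"] for ex in examples])
    labels = torch.tensor([ex["label"] for ex in examples])
    return {"pixel_values": pixel_values, "labels": labels}

training_args = TrainingArguments(
    output_dir="vegetation_classification_model",
    num_train_epochs=4,              # the only value stated in this card
    per_device_train_batch_size=16,  # assumption
    learning_rate=2e-5,              # assumption
    remove_unused_columns=False,     # keep the raw "image" column for preprocess()
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"].with_transform(preprocess),
    data_collator=collate_fn,
)
trainer.train()
```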

## Evaluation results