Dennis Jonathan committed on
Commit 060fad1 · 1 Parent(s): 7b83bbd

Botched the README

Files changed (1): README.md +130 -14
README.md CHANGED
@@ -1,19 +1,135 @@
- This directory includes a few sample datasets to get you started.
-
- * `california_housing_data*.csv` is California housing data from the 1990 US
-   Census; more information is available at:
-   https://developers.google.com/machine-learning/crash-course/california-housing-data-description
-
- * `mnist_*.csv` is a small sample of the
-   [MNIST database](https://en.wikipedia.org/wiki/MNIST_database), which is
-   described at: http://yann.lecun.com/exdb/mnist/
-
- * `anscombe.json` contains a copy of
-   [Anscombe's quartet](https://en.wikipedia.org/wiki/Anscombe%27s_quartet); it
-   was originally described in
-
-   Anscombe, F. J. (1973). 'Graphs in Statistical Analysis'. American
-   Statistician. 27 (1): 17-21. JSTOR 2682899.
-
-   and our copy was prepared by the
-   [vega_datasets library](https://github.com/altair-viz/vega_datasets/blob/4f67bdaad10f45e3549984e17e1b3088c731503d/vega_datasets/_data/anscombe.json).
+ ---
+ license: apache-2.0
+ base_model: google/efficientnet-b2
+ metrics:
+ - accuracy
+ pipeline_tag: image-classification
+ tags:
+ - biology
+ - efficientnet-b2
+ - image-classification
+ - vision
+ ---
+
+ # Bird Classifier EfficientNet-B2
+
+ ## Model Description
+
+ Have you ever looked at a bird and thought, "Woah, if only I knew what bird that is"?
+ Unless you're an avid bird spotter (or just love birds in general), it's hard to tell some bird species apart.
+ Well, you're in luck: it turns out you can use an image classifier to identify bird species!
+
+ This model is a fine-tuned version of [google/efficientnet-b2](https://huggingface.co/google/efficientnet-b2)
+ on the [gpiosenka/100-bird-species](https://www.kaggle.com/datasets/gpiosenka/100-bird-species) dataset available on Kaggle.
+ The version of the dataset used to train the model was retrieved on September 24th, 2023.
+
+ The base model was originally trained on ImageNet-1K, so it might already carry features that are useful for identifying creatures like birds.
+
+ In theory, the accuracy of a random guess on this dataset is 1/525, roughly 0.0019.
+ The model performed very well on all three splits, with accuracies of:
+
+ - **Training**: 0.999480
+ - **Validation**: 0.985904
+ - **Test**: 0.991238
+
+ ## Intended Uses
+
+ You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for
+ fine-tuned versions on a task that interests you.
+
+ Here is an example of the model in action using a picture of a bird:
+
+ ```python
+ # Importing the libraries needed
+ import torch
+ import urllib.request
+ from PIL import Image
+ from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification
+
+ # Determining the file URL
+ url = 'some url'
+
+ # Opening the image using PIL
+ img = Image.open(urllib.request.urlretrieve(url)[0])
+
+ # Loading the model and preprocessor from HuggingFace
+ preprocessor = EfficientNetImageProcessor.from_pretrained("dennisjooo/Birds-Classifier-EfficientNetB2")
+ model = EfficientNetForImageClassification.from_pretrained("dennisjooo/Birds-Classifier-EfficientNetB2")
+
+ # Preprocessing the input
+ inputs = preprocessor(img, return_tensors="pt")
+
+ # Running the inference
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Getting the predicted label
+ predicted_label = logits.argmax(-1).item()
+ print(model.config.id2label[predicted_label])
+ ```
+
+ Alternatively, you can streamline the process using Hugging Face's `pipeline` API:
+
+ ```python
+ # Importing the libraries needed
+ import urllib.request
+ from PIL import Image
+ from transformers import pipeline
+
+ # Determining the file URL
+ url = 'some url'
+
+ # Opening the image using PIL
+ img = Image.open(urllib.request.urlretrieve(url)[0])
+
+ # Loading the model and preprocessor using Pipeline
+ pipe = pipeline("image-classification", model="dennisjooo/Birds-Classifier-EfficientNetB2")
+
+ # Running the inference
+ result = pipe(img)[0]
+
+ # Printing the result label
+ print(result['label'])
+ ```
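+
+ The pipeline can also return several candidate species at once. The short sketch below is an optional extension of the example above; the image path is hypothetical.
+
+ ```python
+ from PIL import Image
+ from transformers import pipeline
+
+ pipe = pipeline("image-classification", model="dennisjooo/Birds-Classifier-EfficientNetB2")
+ img = Image.open("some_bird.jpg")  # hypothetical local image file
+
+ # Asking for the five highest-scoring species instead of only the top one
+ for candidate in pipe(img, top_k=5):
+     print(f"{candidate['label']}: {candidate['score']:.4f}")
+ ```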
+
+ ## Training and Evaluation
+
+ ### Data
+
+ The dataset was taken from [gpiosenka/100-bird-species](https://www.kaggle.com/datasets/gpiosenka/100-bird-species) on Kaggle.
+ It covers 525 bird species, with 84,635 training images and 2,625 images each for validation and testing.
+ Every image in the dataset is a 224 by 224 RGB image.
+
+ The training process used the same split provided by the author.
+ For more details, please refer to the [author's Kaggle page](https://www.kaggle.com/datasets/gpiosenka/100-bird-species).
+
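+ As a minimal, hypothetical sketch (not taken from the original training code), the extracted Kaggle archive can be loaded with `torchvision.datasets.ImageFolder`, assuming the usual train/valid/test folder layout with one sub-folder per species; the path below is illustrative.
+
+ ```python
+ # Illustrative only: assumes the extracted archive keeps a train/valid/test layout
+ from torchvision import datasets, transforms
+
+ DATA_DIR = "data/100-bird-species"  # hypothetical local path to the extracted archive
+
+ # Images are already 224x224 RGB, so converting to tensors is enough for this sketch
+ to_tensor = transforms.ToTensor()
+
+ train_set = datasets.ImageFolder(f"{DATA_DIR}/train", transform=to_tensor)
+ valid_set = datasets.ImageFolder(f"{DATA_DIR}/valid", transform=to_tensor)
+ test_set = datasets.ImageFolder(f"{DATA_DIR}/test", transform=to_tensor)
+
+ print(len(train_set.classes))  # expected to be 525
+ ```
+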
+ ### Training Procedure
+
+ Training was done using PyTorch on Kaggle's free P100 GPU, together with the Lightning and TorchMetrics libraries.
+
+ ### Preprocessing
+
+ Each image is preprocessed according to the original author's [config](https://huggingface.co/google/efficientnet-b2/blob/main/preprocessor_config.json).
+
+ The training set was also augmented using the following transforms (a possible implementation is sketched after the list):
+
+ - Random rotation of up to 10 degrees, applied with a probability of 50%
+ - Random horizontal flipping, applied with a probability of 50%
+
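+ The following is a minimal sketch of those augmentations using `torchvision.transforms`; the choice of library is an assumption, and resizing/normalization are left to the preprocessor config linked above.
+
+ ```python
+ # Illustrative augmentation pipeline matching the two transforms listed above
+ from torchvision import transforms
+
+ train_augmentations = transforms.Compose([
+     # Rotate by up to 10 degrees, applied 50% of the time
+     transforms.RandomApply([transforms.RandomRotation(degrees=10)], p=0.5),
+     # Flip horizontally 50% of the time
+     transforms.RandomHorizontalFlip(p=0.5),
+ ])
+ ```
+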
+ ### Training Hyperparameters
+
+ The following hyperparameters were used for training (see the Lightning sketch after the list):
+
+ - **Training regime**: fp32
+ - **Optimizer**: Adam with default betas
+ - **Learning rate**: 1e-3
+ - **Learning rate scheduler**: Reduce on plateau, monitoring validation loss with a patience of 2 and a decay factor of 0.1
+ - **Batch size**: 64
+ - **Early stopping**: Monitors validation accuracy with a patience of 10
+
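+ The snippet below is a rough sketch of how these settings could be wired together with Lightning; the class and metric names (`BirdClassifier`, `val_loss`, `val_acc`) are hypothetical and not taken from the original training code.
+
+ ```python
+ # Illustrative Lightning wiring for the hyperparameters listed above
+ import torch
+ import pytorch_lightning as pl
+ from pytorch_lightning.callbacks import EarlyStopping
+
+
+ class BirdClassifier(pl.LightningModule):
+     """Hypothetical wrapper around the fine-tuned EfficientNet-B2 classifier."""
+
+     def __init__(self, model):
+         super().__init__()
+         self.model = model
+
+     def configure_optimizers(self):
+         # Adam with default betas and a learning rate of 1e-3
+         optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
+         # Decay the LR by a factor of 0.1 when validation loss plateaus for 2 epochs
+         scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
+             optimizer, mode="min", factor=0.1, patience=2
+         )
+         return {
+             "optimizer": optimizer,
+             "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
+         }
+
+
+ # Stop training when validation accuracy has not improved for 10 epochs
+ early_stopping = EarlyStopping(monitor="val_acc", mode="max", patience=10)
+ trainer = pl.Trainer(precision=32, callbacks=[early_stopping])
+ ```
+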
+ ### Results
+
+ The image below shows the results of the training process on both the training and validation sets:
+
+ ![Training results](https://drive.google.com/uc?export=view&id=1cf1gPGiP9ItoFDGcyrXxC7OHdxTYkYlM)