---
license: other
license_name: sla0044
license_link: >-
  https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/LICENSE.md
---
# EfficientNet v2

## **Use case** : `Image classification`

# Model description

The EfficientNet v2 family is one of the best-performing topologies for image classification. It was obtained through neural architecture search, with special care given to reducing both training time and the number of parameters.

This family of networks comprises several subtypes: B0 (224x224), B1 (240x240), B2 (260x260), B3 (300x300), and S (384x384), ranked in increasing order of depth and width.
There are also M, L, and XL variants, but they are too large to be executed efficiently on the STM32N6.

All these networks are available on https://www.tensorflow.org/api_docs/python/tf/keras/applications/, pre-trained on ImageNet.
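
As a minimal sketch of how to pull one of these pre-trained backbones (not specific to the model zoo; the image path is a placeholder), assuming TensorFlow 2.x:

```python
import numpy as np
import tensorflow as tf

# EfficientNetV2-B0 pre-trained on ImageNet; expects 224x224 RGB inputs.
model = tf.keras.applications.EfficientNetV2B0(weights="imagenet")

# "example.jpg" is a placeholder path for a test image.
img = tf.keras.utils.load_img("example.jpg", target_size=(224, 224))
x = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
x = tf.keras.applications.efficientnet_v2.preprocess_input(x)

preds = model.predict(x)  # shape (1, 1000): ImageNet class confidences
print(tf.keras.applications.efficientnet_v2.decode_predictions(preds, top=3)[0])
```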

## Network information

| Network Information | Value |
|---------------------|----------------------------------------------------------------------------------|
| Framework           | TensorFlow Lite / ONNX quantizer |
| Params (B0)         | 7.1 M |
| Quantization        | int8 |
| Provenance          | https://www.tensorflow.org/api_docs/python/tf/keras/applications/efficientnet_v2 |
| Paper               | https://arxiv.org/pdf/2104.00298 |

The models are quantized using the TensorFlow Lite converter or the ONNX quantizer.
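
For the TFLite path, a minimal post-training quantization sketch could look like the following, where `model` is a trained Keras model and `rep_images` is a small iterable of preprocessed float32 calibration images (both placeholders):

```python
import tensorflow as tf

def representative_dataset():
    # Yield a few calibration samples so the converter can estimate activation ranges.
    for image in rep_images:  # placeholder: (H, W, 3) float32 arrays
        yield [image[None, ...].astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # `model` is a placeholder
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8  # matches the UINT8 input described below

with open("efficientnet_v2_int8.tflite", "wb") as f:  # placeholder output path
    f.write(converter.convert())
```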
35
+
36
+
37
+ ## Network inputs / outputs
38
+
39
+
40
+ For an image resolution of NxM and P classes
41
+
42
+ | Input Shape | Description |
43
+ |---------------|---------------------------------------------------------------------|
44
+ | (1, N, M, 3) | Single NxM RGB image with UINT8 values between 0 and 255 for tflite |
45
+ | (1, 3, N, M) | Single NxM RGB image with INT8 values between -128 and 127 for ONNX |
46
+
47
+ | Output Shape | Description |
48
+ | ----- |----------------------------------------------------------|
49
+ | (1, P) | Per-class confidence for P classes in FLOAT32 for tflite |
50
+ | (1, P) | Per-class confidence for P classes in FLOAT32 for ONNX |
51
+
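
To illustrate these shapes, here is a minimal sketch of running one of the quantized `.tflite` files with the TFLite interpreter; the model path and `image_u8` (an NxM uint8 RGB image) are placeholders:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="efficientnet_v2_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]   # shape (1, N, M, 3), dtype uint8
out = interpreter.get_output_details()[0]  # shape (1, P), dtype float32

interpreter.set_tensor(inp["index"], image_u8[None, ...])  # add the batch dimension
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]  # per-class confidences
print("top-1 class index:", int(np.argmax(scores)))
```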


## Recommended platforms

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0  | []        | []          |
| STM32L4  | []        | []          |
| STM32U5  | []        | []          |
| STM32H7  | []        | []          |
| STM32MP1 | [x]       | [x]         |
| STM32MP2 | [x]       | [x]         |
| STM32N6  | [x]       | [x]         |


# Performances

## Metrics

Measurements are made with the default STM32Cube.AI configuration, with the input / output allocated option enabled.


### Reference **NPU** memory footprint on food-101 and ImageNet datasets (see Accuracy for details on the datasets)
| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------------|--------------------|---------------------|----------------------|-----------------------|
| [efficientnet_v2B0_224_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B0_224_fft/efficientnet_v2B0_224_fft_qdq_int8.onnx) | food-101 | Int8 | 224x224x3 | STM32N6 | 1834.44 | 0.0 | 7553.77 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B1_240_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B1_240_fft/efficientnet_v2B1_240_fft_qdq_int8.onnx) | food-101 | Int8 | 240x240x3 | STM32N6 | 2589.97 | 0.0 | 8924.78 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B2_260_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B2_260_fft/efficientnet_v2B2_260_fft_qdq_int8.onnx) | food-101 | Int8 | 260x260x3 | STM32N6 | 2629.56 | 528.12 | 11212.75 | 10.0.0 | 2.0.0 |
| [efficientnet_v2S_384_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2S_384_fft/efficientnet_v2S_384_fft_qdq_int8.onnx) | food-101 | Int8 | 384x384x3 | STM32N6 | 2700 | 6912 | 25756.92 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B0_224 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B0_224/efficientnet_v2B0_224_qdq_int8.onnx) | ImageNet | Int8 | 224x224x3 | STM32N6 | 1834.44 | 0.0 | 8680.39 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B1_240 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B1_240/efficientnet_v2B1_240_qdq_int8.onnx) | ImageNet | Int8 | 240x240x3 | STM32N6 | 2589.97 | 0.0 | 10051.7 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B2_260 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B2_260/efficientnet_v2B2_260_qdq_int8.onnx) | ImageNet | Int8 | 260x260x3 | STM32N6 | 2629.56 | 528.12 | 12451.77 | 10.0.0 | 2.0.0 |
| [efficientnet_v2S_384 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2S_384/efficientnet_v2S_384_qdq_int8.onnx) | ImageNet | Int8 | 384x384x3 | STM32N6 | 2700 | 6912 | 26884.47 | 10.0.0 | 2.0.0 |


### Reference **NPU** inference time on food-101 and ImageNet datasets (see Accuracy for details on the datasets)
| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|---------------------|-----------|----------------------|-----------------------|
| [efficientnet_v2B0_224_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B0_224_fft/efficientnet_v2B0_224_fft_qdq_int8.onnx) | food-101 | Int8 | 224x224x3 | STM32N6570-DK | NPU/MCU | 54.32 | 18.41 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B1_240_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B1_240_fft/efficientnet_v2B1_240_fft_qdq_int8.onnx) | food-101 | Int8 | 240x240x3 | STM32N6570-DK | NPU/MCU | 73.89 | 13.53 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B2_260_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B2_260_fft/efficientnet_v2B2_260_fft_qdq_int8.onnx) | food-101 | Int8 | 260x260x3 | STM32N6570-DK | NPU/MCU | 146.01 | 6.85 | 10.0.0 | 2.0.0 |
| [efficientnet_v2S_384_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2S_384_fft/efficientnet_v2S_384_fft_qdq_int8.onnx) | food-101 | Int8 | 384x384x3 | STM32N6570-DK | NPU/MCU | 842 | 1.19 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B0_224 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B0_224/efficientnet_v2B0_224_qdq_int8.onnx) | ImageNet | Int8 | 224x224x3 | STM32N6570-DK | NPU/MCU | 57.5 | 17.39 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B1_240 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B1_240/efficientnet_v2B1_240_qdq_int8.onnx) | ImageNet | Int8 | 240x240x3 | STM32N6570-DK | NPU/MCU | 77.25 | 12.94 | 10.0.0 | 2.0.0 |
| [efficientnet_v2B2_260 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B2_260/efficientnet_v2B2_260_qdq_int8.onnx) | ImageNet | Int8 | 260x260x3 | STM32N6570-DK | NPU/MCU | 148.78 | 6.72 | 10.0.0 | 2.0.0 |
| [efficientnet_v2S_384 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2S_384/efficientnet_v2S_384_qdq_int8.onnx) | ImageNet | Int8 | 384x384x3 | STM32N6570-DK | NPU/MCU | 809.73 | 1.23 | 10.0.0 | 2.0.0 |

* Deployment of all models listed in the table is supported, except for the efficientnet_v2S_384 model, for which support is coming soon.

### Accuracy with Food-101 dataset

Dataset details: [link](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/), license: [-](), quotation: [[3]](#3), number of classes: 101, number of images: 101,000

| Model | Format | Resolution | Top 1 Accuracy |
|-------|--------|------------|----------------|
| [efficientnet_v2B0_224_fft](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B0_224_fft/efficientnet_v2B0_224_fft.h5) | Float | 224x224x3 | 81.35 % |
| [efficientnet_v2B0_224_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B0_224_fft/efficientnet_v2B0_224_fft_qdq_int8.onnx) | Int8 | 224x224x3 | 81.1 % |
| [efficientnet_v2B1_240_fft](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B1_240_fft/efficientnet_v2B1_240_fft.h5) | Float | 240x240x3 | 83.23 % |
| [efficientnet_v2B1_240_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B1_240_fft/efficientnet_v2B1_240_fft_qdq_int8.onnx) | Int8 | 240x240x3 | 82.95 % |
| [efficientnet_v2B2_260_fft](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B2_260_fft/efficientnet_v2B2_260_fft.h5) | Float | 260x260x3 | 84.37 % |
| [efficientnet_v2B2_260_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2B2_260_fft/efficientnet_v2B2_260_fft_qdq_int8.onnx) | Int8 | 260x260x3 | 84.04 % |
| [efficientnet_v2S_384_fft](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2S_384_fft/efficientnet_v2S_384_fft.h5) | Float | 384x384x3 | 88.16 % |
| [efficientnet_v2S_384_fft onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/ST_pretrainedmodel_public_dataset/food-101/efficientnet_v2S_384_fft/efficientnet_v2S_384_fft_qdq_int8.onnx) | Int8 | 384x384x3 | 87.34 % |


### Accuracy with ImageNet

Dataset details: [link](https://www.image-net.org), license: BSD-3-Clause, quotation: [[4]](#4), number of classes: 1000.
To perform the quantization, we calibrated the activations with a random subset of the training set.
For the sake of simplicity, the accuracy reported here was estimated on the 10,000 labelled images of the validation set.
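
A minimal sketch of this calibration step with the ONNX Runtime static quantizer (QDQ int8, matching the `_qdq_int8.onnx` files above) follows; the model paths, the `calib_images` iterable, and the `"input"` tensor name are placeholders:

```python
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)

class ImageCalibrationReader(CalibrationDataReader):
    """Feeds preprocessed (3, N, M) float32 images to the quantizer."""
    def __init__(self, images, input_name):
        self._it = iter(images)
        self._input_name = input_name

    def get_next(self):
        img = next(self._it, None)  # returning None ends the calibration loop
        return None if img is None else {self._input_name: img[None, ...]}

quantize_static(
    "efficientnet_v2B0_224.onnx",           # float model (placeholder path)
    "efficientnet_v2B0_224_qdq_int8.onnx",  # quantized output (placeholder path)
    ImageCalibrationReader(calib_images, "input"),  # "input" is an assumed tensor name
    quant_format=QuantFormat.QDQ,
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)
```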

| Model | Format | Resolution | Top 1 Accuracy |
|-------|--------|------------|----------------|
| [efficientnet_v2B0_224](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B0_224/efficientnet_v2B0_224.h5) | Float | 224x224x3 | 73.94 % |
| [efficientnet_v2B0_224 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B0_224/efficientnet_v2B0_224_qdq_int8.onnx) | Int8 | 224x224x3 | 72.21 % |
| [efficientnet_v2B1_240](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B1_240/efficientnet_v2B1_240.h5) | Float | 240x240x3 | 76.14 % |
| [efficientnet_v2B1_240 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B1_240/efficientnet_v2B1_240_qdq_int8.onnx) | Int8 | 240x240x3 | 75.5 % |
| [efficientnet_v2B2_260](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B2_260/efficientnet_v2B2_260.h5) | Float | 260x260x3 | 76.58 % |
| [efficientnet_v2B2_260 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2B2_260/efficientnet_v2B2_260_qdq_int8.onnx) | Int8 | 260x260x3 | 76.26 % |
| [efficientnet_v2S_384](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2S_384/efficientnet_v2S_384.h5) | Float | 384x384x3 | 83.52 % |
| [efficientnet_v2S_384 onnx](https://github.com/STMicroelectronics/stm32ai-modelzoo/image_classification/efficientnetv2/Public_pretrainedmodel_public_dataset/ImageNet/efficientnet_v2S_384/efficientnet_v2S_384_qdq_int8.onnx) | Int8 | 384x384x3 | 83.07 % |


## Retraining and integration in a simple example

Please refer to the stm32ai-modelzoo-services GitHub [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).


# References

<a id="1">[1]</a>
"Tf_flowers: tensorflow datasets," TensorFlow. [Online]. Available: https://www.tensorflow.org/datasets/catalog/tf_flowers.

<a id="2">[2]</a>
J, Arun Pandian; Gopal, Geetharamani (2019), "Data for: Identification of Plant Leaf Diseases Using a 9-layer Deep Convolutional Neural Network," Mendeley Data, V1, doi: 10.17632/tywbtsjrjv.1.

<a id="3">[3]</a>
L. Bossard, M. Guillaumin, and L. Van Gool, "Food-101 -- Mining Discriminative Components with Random Forests," European Conference on Computer Vision, 2014.

<a id="4">[4]</a>
Olga Russakovsky*, Jia Deng*, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei (* = equal contribution), "ImageNet Large Scale Visual Recognition Challenge."