about.md
Trained on a 1-GPU H800 server from AutoDL on 2025.2.3 (UTC+8) with PyTorch, and converted to .h5 format at the same time.

Basic Model uses a CNN with an accuracy of 75% on test data (80.7 MB)
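The basic CNN is not described in detail here; as a rough illustration only, a classifier of this kind might look like the sketch below. The layer widths, the 64x64 input resolution, and the 100-way jersey-number output (digits 00–99) are all assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class BasicCNN(nn.Module):
    """Illustrative basic CNN classifier; all sizes are assumptions."""
    def __init__(self, num_classes=100):  # assumed: jersey numbers 0-99
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)  # (N, 64, 16, 16) for a 64x64 RGB input
        return self.classifier(x.flatten(1))

model = BasicCNN()
logits = model(torch.randn(2, 3, 64, 64))  # a batch of two 64x64 RGB images
```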
V2 Engine uses ViT with accuracy of at most 40%, Keyboard Interrupted 2025.2.3 15

V3 Engine uses a Hybrid Model (a combination of convolutional layers and a Multi-Layer Perceptron (MLP)) with an accuracy of 68.65% on test data. (34.3 MB)
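The hybrid design described above, convolutional feature extraction followed by an MLP head, can be sketched as below. The channel counts, hidden size, and input resolution are illustrative assumptions, not the actual V3 weights.

```python
import torch
import torch.nn as nn

class HybridNet(nn.Module):
    """Conv feature extractor + multi-layer perceptron head (sizes assumed)."""
    def __init__(self, num_classes=100):
        super().__init__()
        # Convolutional layers extract spatial features
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # MLP head maps the flattened feature map to class scores
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.mlp(self.conv(x))

model = HybridNet()
out = model(torch.randn(2, 3, 64, 64))
```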

Trained on 2025.2.4 (UTC+8) with an H800

V4 Engine is based on V1 but improved with more convolutional layers.
Bottleneck Blocks: we can use bottleneck blocks (a 1x1 conv before and after the 3x3 conv) to reduce computation and increase depth.
Residual Connections: implement residual connections to ease training of the very deep network and to help avoid vanishing gradients.
Increased Filters: use more filters in the layers to increase learning capacity.
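The first two ideas combine naturally in a single residual bottleneck block; a minimal PyTorch sketch follows (the channel counts are illustrative, not the V4 configuration):

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand, wrapped in a residual connection."""
    def __init__(self, channels=64, bottleneck=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1), nn.ReLU(),           # 1x1 reduce
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1), nn.ReLU(),  # cheap 3x3
            nn.Conv2d(bottleneck, channels, kernel_size=1),                      # 1x1 expand
        )
        self.act = nn.ReLU()

    def forward(self, x):
        # The skip connection keeps gradients flowing through deep stacks
        return self.act(x + self.body(x))

block = BottleneckBlock(64)
y = block(torch.randn(1, 64, 32, 32))  # output shape matches input, so blocks stack freely
```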
The technology used in this solution combines EfficientNet-B0 as the base model,
After training and optimization, the final quantized model achieves a compact size of 16.6 MB, making it highly efficient for deployment.
On the test dataset, the model delivers a strong final accuracy of 93.78%, demonstrating its effectiveness in jersey number detection while meeting strict size constraints.
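The exact quantization recipe is not described here; one common way to get this kind of size reduction is PyTorch post-training dynamic quantization, sketched below on a stand-in model (the real base is EfficientNet-B0; the layer sizes here are assumptions):

```python
import io
import torch
import torch.nn as nn

def size_mb(model: nn.Module) -> float:
    """Serialized size of a model's weights in megabytes."""
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return buf.tell() / 1e6

# Stand-in classifier head; 1280 matches EfficientNet-B0's feature width.
model = nn.Sequential(nn.Linear(1280, 512), nn.ReLU(), nn.Linear(512, 100))

# Post-training dynamic quantization: Linear weights are stored as int8.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# The quantized copy serializes to roughly a quarter of the fp32 size.
```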

Trained on 2025.2.5 (UTC+8) with an H800

E2 Engine: 94.6% accuracy on test data
E2 technology represents an advanced iteration of E1, focusing on enhanced efficiency, security, and scalability.
While E1 laid the foundational groundwork by optimizing basic system processes and improving task automation, E2 takes a step further by integrating more sophisticated encryption protocols, leveraging machine learning for predictive performance, and streamlining resource allocation.