FredZhang7 committed on
Commit cdbd71c · 1 Parent(s): b716c3f

add to transformers

Files changed (1)
  1. README.md +9 -3
README.md CHANGED
@@ -26,6 +26,12 @@ in terms of top-1 accuracy, efficiency, and robustness on my dataset and [CMAD b

<br>

+ ### Load 1000 Class PyTorch Jit Model
+ ```python
+ from transformers import AutoModel
+ model = AutoModel.from_pretrained("FredZhang7/efficientnetv2.5_rw_s", trust_remote_code=True)
+ ```
+
### Prepare Model for Training
To change the number of classes, replace the linear classification layer.
Here's an example of how to convert the architecture into a trainable model.
@@ -69,8 +75,8 @@ I finetuned the existing models on either 299x299, 304x304, 320x320, or 384x384

`efficientnet_b3_pruned` achieved the second highest top-1 accuracy as well as the highest epoch-1 training accuracy on my task, out of EfficientNetV2.5 small and all existing EfficientNet models my 24 GB VRAM RTX 3090 could handle.

- I will publish the detailed report in another model repository, including the link to the GVNS benchmarks.
- This repository is only for the base model, pretrained on ImageNet, not my task.
+ I will publish the detailed report in [this model repository](https://huggingface.co/aistrova/safesearch-v5.0).
+ This repository is only for the base model, pretrained a bit on ImageNet, not my task.

### Carbon Emissions
- Comparing all models and testing my new architectures costed roughly 504 GPU hours, over a span of 27 days.
+ Comparing all models and testing my new architectures costed roughly 648 GPU hours, over a span of 35 days.
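The "Load 1000 Class PyTorch Jit Model" snippet added by this commit only shows the load call. Below is a minimal, hypothetical inference sketch built around it; the 288x288 input size, the ImageNet normalization constants, the placeholder image path, and the assumption that the remote-code wrapper returns raw class logits are not stated in this diff.

```python
# Hypothetical usage sketch around the load call added in this commit.
# Assumptions not stated in the diff: 288x288 input, ImageNet normalization
# statistics, and that the remote-code wrapper returns raw class logits.
import torch
from PIL import Image
from torchvision import transforms
from transformers import AutoModel

model = AutoModel.from_pretrained("FredZhang7/efficientnetv2.5_rw_s", trust_remote_code=True)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(288),              # assumed input resolution
    transforms.CenterCrop(288),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # standard ImageNet stats (assumed)
                         std=[0.229, 0.224, 0.225]),
])

# "example.jpg" is a placeholder path for any RGB image.
image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logits = model(image)                # assumed shape: (1, 1000) ImageNet logits
print(logits.argmax(dim=-1).item())      # predicted ImageNet class index
```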
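The "Prepare Model for Training" context lines mention replacing the linear classification layer, but the README's actual example sits outside this hunk. As an illustration only, here is one way to adapt the 1000-class jit export to a new class count by stacking a fresh linear head on its output; the `FineTuneWrapper` name and the stacking approach are assumptions, not the author's example.

```python
# Illustration only: one way to make the 1000-class jit export trainable for a
# new task by stacking a fresh linear head on its output. The README's own
# example (outside this hunk) may take a different approach.
import torch
import torch.nn as nn

class FineTuneWrapper(nn.Module):
    def __init__(self, backbone: nn.Module, num_classes: int):
        super().__init__()
        self.backbone = backbone                   # pretrained 1000-class model
        self.head = nn.Linear(1000, num_classes)   # new task-specific classifier

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

# e.g. wrapper = FineTuneWrapper(model, num_classes=5), then train as usual.
```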