Added Smooth-GmP - best Long-CLIP fine-tune yet! ✨
---
datasets:
- SPRIGHT-T2I/spright_coco
---
## A fine-tune of [BeichenZhang/LongCLIP-L](https://huggingface.co/BeichenZhang/LongCLIP-L) -- Long-CLIP ViT-L/14 expanded to 248 tokens.
----
## Update 12/AUG/2024:
New *BEST* model, trained with a custom loss that uses label smoothing.

The gain is small on a large, diverse, good-quality dataset, but large relative gains are possible for an overfit-prone fine-tune (small batch size, a single GPU, a narrow dataset of e.g. 'sneakers', and so on)!

Fine-tune your own model with the provided GmP-Smooth code: [https://github.com/zer0int/Long-CLIP](https://github.com/zer0int/Long-CLIP)
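The actual GmP-Smooth loss is in the linked repository; as a rough illustrative sketch (NumPy, names hypothetical, not the repo's API), label smoothing in a CLIP-style contrastive loss spreads a small amount of target probability mass over the non-matching image/text pairs instead of putting all of it on the diagonal:

```python
import numpy as np

def smoothed_cross_entropy(logits, target_idx, smoothing=0.1):
    """Cross-entropy against a label-smoothed target distribution."""
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    n, k = logits.shape
    # Smoothed targets: eps/k everywhere, plus (1 - eps) on the true class.
    targets = np.full((n, k), smoothing / k)
    targets[np.arange(n), target_idx] += 1.0 - smoothing
    return -(targets * log_probs).sum(axis=-1).mean()

def clip_loss(image_feats, text_feats, logit_scale=100.0, smoothing=0.1):
    """Symmetric CLIP-style contrastive loss with label smoothing."""
    # L2-normalize, then cosine-similarity logits; matching pairs sit on the diagonal.
    img = image_feats / np.linalg.norm(image_feats, axis=-1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    logits = logit_scale * img @ txt.T
    idx = np.arange(len(img))
    # Average of image->text and text->image directions.
    return 0.5 * (smoothed_cross_entropy(logits, idx, smoothing)
                  + smoothed_cross_entropy(logits.T, idx, smoothing))
```

Because the smoothed targets never reach exact one-hot confidence, the loss penalizes over-confident diagonal logits, which is where the regularizing effect on small, narrow fine-tuning datasets comes from.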
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6490359a877fc29cb1b09451/l3FYkaicihqXv5D9wLDAF.png)

----
The fine-tune has an improved ImageNet/ObjectNet accuracy of 0.89 (the authors' original Long-CLIP: ~0.81).