zer0int
AI & ML interests
I 💙 CLIP ~ 🤓🫶🤖
Recent Activity
liked a model 5 days ago: microsoft/LLM2CLIP-Openai-L-14-224
liked a model 5 days ago: microsoft/LLM2CLIP-Llama-3-8B-Instruct-CC-Finetuned
updated a model 7 days ago: zer0int/LongCLIP-SAE-ViT-L-14
zer0int's activity
Is this or the my-gmp one better for SD 3.5 Large? (2) · #1 opened 14 days ago by mimizukari
Recommended for Stable Diffusion 3.5 Large? (1) · #12 opened 14 days ago by mimizukari
Are the long (248) vs short (77) text-enhancing models of similar quality for great text, just with different token-length compatibility? (3) · #11 opened 21 days ago by aydin99
Is it okay to use for Flux LoRA training? (3) · #8 opened 3 months ago by WilsonModt
Difference between 300 MB and 900 MB versions? (1) · #10 opened 30 days ago by Geralt28
I can't really make sense of why some models work and some will not work at all. (5) · #5 opened about 1 month ago by kellempxt
What text can be generated? (2) · #9 opened about 2 months ago by CASIDIO
Is this only the ContrastiveLoss finetuning? Did you use the coarse-grained alignment loss proposed in Long-CLIP? (1) · #4 opened about 2 months ago by cuifeng
AssertionError: You do not have CLIP state dict! (5) · #2 opened 4 months ago by PixelClassisist
Works with GGUF? (4) · #7 opened 3 months ago by PuReEnErGy84
Instructions for deployment/use? (5) · #4 opened 4 months ago by sanctimon
Which file to use with Flux? (1) · #6 opened 3 months ago by Ai11Ali
Diffusers missing config.json file (13) · #3 opened 4 months ago by jspaun
nice job! (4) · #5 opened 3 months ago by vladmandic
SDXL usage (4) · #1 opened 4 months ago by apiasecki
I get the exact same results as CLIP-L (2) · #2 opened 4 months ago by stduhpf
In what folder should I put it? (3) · #1 opened 4 months ago by ZeroCool22