Mark Mealman (Dracones)
AI & ML interests: None yet
Recent Activity
- updated a model 8 days ago: Dracones/QwQ-32B_exl2_4.0bpw
- published a model 8 days ago: Dracones/QwQ-32B_exl2_4.0bpw
- updated a model 8 days ago: Dracones/QwQ-32B_exl2_4.5bpw
Dracones's activity
- V0.1 at 4.0-4.25bpw? (1) · #1 opened 25 days ago by Surprisekitty
- Quant Request (1) · #1 opened 27 days ago by John198
- Can you produce a 2.4bpw quantization of this model? (2) · #1 opened 3 months ago by xldistance
- Perplexity (1) · #1 opened 3 months ago by SekkSea
- Fix for multiple graphics cards (4) · #2 opened 7 months ago by Ataylorm
- weights_only=True error (4) · #4 opened 7 months ago by Dracones
- feedback (22) · #2 opened 10 months ago by Szarka
- 3.75 please? (2) · #2 opened 11 months ago by jackboot
- 2.75 bpw high EQ bench (1) · #1 opened 11 months ago by koesn
- EXL2 Quants are up (2) · #2 opened 11 months ago by Dracones
- Could you help to create the 3.25bpw model so it can fits on A100? (4) · #1 opened 11 months ago by davideuler
- Can this version be loaded with vllm? (2) · #1 opened 11 months ago by wawoshashi
- A good tune · #3 opened 11 months ago by Dracones
- Question about prompting and System prompt in Vicuna format (4) · #10 opened 11 months ago by houmie
- Update README.md (2) · #1 opened 11 months ago by vvekthkr
- Measurments (4) · #1 opened 12 months ago by altomek
- Any chance anyone is quantizing this into a 2.4bpw EXL2 version for those of us with a single 24GB video cards? (1) · #30 opened about 1 year ago by clevnumb
- 3.5bpw request (4) · #1 opened about 1 year ago by Gesard
- Error (3) · #1 opened about 1 year ago by Hardcore7651
- EXL2 Quants (9) · #2 opened about 1 year ago by Dracones