LagPixelLOL
v2ray
AI & ML interests
Looking for compute sponsors; please contact me via email at [email protected]!
Recent Activity
updated a model about 1 hour ago: v2ray/GPT4chan-8B-AWQ
updated a model about 1 hour ago: v2ray/GPT4chan-8B-FP8
updated a model about 1 hour ago: v2ray/GPT4chan-8B
Organizations
v2ray's activity
vllm support a100 (10) · #2 opened 22 days ago by HuggingLianWang
Deployment framework (8) · #2 opened 12 days ago by xro7
Smaller deepseek models? (6) · #1 opened 14 days ago by loshka2
hello v2ray (2) · #1 opened 4 months ago by leosefcik
Why 12b? Who could run that locally? (47) · #1 opened 6 months ago by kaidu88
Usage? (1) · #1 opened 6 months ago by colourspec
[bot] Conversion to Parquet · #1 opened 8 months ago by parquet-converter
[AUTOMATED] Model Memory Requirements · #1 opened 8 months ago by model-sizer-bot
Support for Diffusers? (16) · #5 opened 8 months ago by tintwotin
How to convert to HF format? (5) · #6 opened 8 months ago by ddh0
How much VRAM did you use? (3) · #2 opened 9 months ago by ShukantP
conversion to HF (18) · #1 opened 9 months ago by ehartford
Are these files exactly the same as those in the `mistral-community` repo? (5) · #1 opened 10 months ago by jukofyork
Fixed chat template. (3) · #4 opened 10 months ago by v2ray
Added chat template. · #3 opened 10 months ago by v2ray
Update convert.py · #6 opened 10 months ago by bullerwins
Quantized models in GGUF (4) · #4 opened 10 months ago by MaziyarPanahi
mixtral 8x7B Instruct v2.0 (1) · #3 opened 10 months ago by bayraktaroglu
Good show! (1) · #2 opened 10 months ago by ehartford