Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4
Tags: Text Generation, Transformers, Safetensors, English, qwen2, chat, conversational, text-generation-inference, Inference Endpoints, 4-bit precision, gptq
arXiv: 2407.10671
License: apache-2.0