Hugging Face
Models tagged "gptqmodel" (17 results, sorted by Trending)
ModelCloud/dbrx-base-converted-v2 · Text Generation · Updated Jul 9 · 2
ModelCloud/dbrx-instruct-converted-v2 · Text Generation · Updated Jul 9 · 6
ModelCloud/gemma-2-9b-it-gptq-4bit · Text Generation · Updated Jul 9 · 823 · 3
ModelCloud/gemma-2-9b-gptq-4bit · Text Generation · Updated Jul 9 · 27
ModelCloud/DeepSeek-V2-Lite-gptq-4bit · Text Generation · Updated Jul 9 · 24
ModelCloud/internlm-2.5-7b-gptq-4bit · Feature Extraction · Updated Jul 9 · 6
ModelCloud/internlm-2.5-7b-chat-gptq-4bit · Feature Extraction · Updated Jul 9 · 12
ModelCloud/internlm-2.5-7b-chat-1m-gptq-4bit · Feature Extraction · Updated Jul 9 · 8
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit · Text Generation · Updated Jul 23 · 9.04k · 3
ModelCloud/gemma-2-27b-it-gptq-4bit · Text Generation · Updated Jul 23 · 7.04k · 10
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit · Text Generation · Updated Jul 29 · 441 · 3
ModelCloud/Meta-Llama-3.1-8B-gptq-4bit · Text Generation · Updated Jul 26 · 374
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit · Text Generation · Updated Jul 27 · 148 · 4
ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit · Text Generation · Updated Jul 26 · 344 · 1
ModelCloud/Meta-Llama-3.1-405B-Instruct-gptq-4bit · Text Generation · Updated Jul 30 · 144 · 2
ModelCloud/EXAONE-3.0-7.8B-Instruct-gptq-4bit · Updated Aug 9 · 7 · 3
ModelCloud/GRIN-MoE-gptq-4bit · Updated 29 days ago · 32 · 6