smpanaro/pythia-1.4b-AutoGPTQ-4bit-128g

Tags: Text Generation, Transformers, wikitext, gpt_neox, Inference Endpoints, 4-bit precision, gptq
License: MIT
Files and versions
1 contributor, 3 commits
Latest commit: b3bc4c9 (verified), "Create README.md" by smpanaro, 10 months ago
File                                Size       Last commit
.gitattributes                      1.52 kB    initial commit, 10 months ago
README.md                           1.19 kB    Create README.md, 10 months ago
config.json                         1.11 kB    Upload of AutoGPTQ quantized model, 10 months ago
gptq_model-4bit-128g.safetensors    1.04 GB    Upload of AutoGPTQ quantized model, 10 months ago (LFS)
quantize_config.json                314 Bytes  Upload of AutoGPTQ quantized model, 10 months ago
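
The repository holds a 4-bit, group-size-128 AutoGPTQ quantization of EleutherAI's Pythia-1.4b, stored as safetensors alongside its quantize_config.json. The snippet below is a minimal sketch of one way to load and run such a checkpoint with the auto-gptq package; the device string, prompt, and generation settings are illustrative assumptions, not part of this repository.

```python
# Sketch: load a GPTQ-quantized checkpoint with auto-gptq (assumes auto-gptq,
# transformers, and a CUDA device are available).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo_id = "smpanaro/pythia-1.4b-AutoGPTQ-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(repo_id)

# from_quantized reads quantize_config.json and the 4-bit safetensors weights.
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",
    use_safetensors=True,
)

# Illustrative prompt and generation settings.
inputs = tokenizer("The quick brown fox", return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```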