ianZzzzzz/GLM-130B-quant-int4-4gpu
An int4-quantized version of the GLM-130B model that can run inference on four RTX 3090 Ti GPUs.
license: apache-2.0
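The card does not describe the quantization scheme itself. As an illustration only (this is a generic sketch of symmetric int4 weight quantization, not this repository's actual method), a minimal round-trip looks like:

```python
import numpy as np

def quantize_int4(w):
    """Symmetric per-row int4 quantization: map floats to integers in [-8, 7]."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # per-row scale factor
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from int4 codes and scales."""
    return q.astype(np.float32) * scale

np.random.seed(0)
w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
# each element's reconstruction error is at most half a quantization step (scale / 2)
```

In practice int4 weights are packed two per byte and dequantized on the fly inside the matmul kernels, which is what lets a 130B-parameter model fit across four 24 GB GPUs.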
[email protected]