Tags: Text Generation · Transformers · Safetensors · Serbian · mistral · text-generation-inference · 4-bit precision · bitsandbytes
YugoGPT-Quantized-GGUF
Quantized by: datatab
License: apache-2.0
Original model author: gordicaleksa/YugoGPT
Description
This repo contains Safetensors-format model files for YugoGPT.
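Since the tags indicate a 4-bit bitsandbytes quantization served through transformers, here is a minimal loading sketch. It assumes the quantization config is stored with the repo so `from_pretrained` picks it up automatically, and that `transformers`, `accelerate`, and `bitsandbytes` are installed with a CUDA GPU available; the Serbian prompt is only illustrative.

```python
# Minimal sketch: load the pre-quantized 4-bit model and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datatab/YugoGPT-Quantize"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)

# Example Serbian prompt (illustrative only).
prompt = "Koji je glavni grad Srbije?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```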
Model size: 3.86B params (Safetensors)
Tensor types: F32, FP16, U8