4-bit Llammas in GGUF
This is a 4-bit quantized version of TartuNLP/Llammas, an English and Estonian conversational Llama 2 model, in the GGUF file format.
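
The quantized weights are distributed as a GGUF file in this repository (AlbertUnn/LlammasGGUF). As a minimal sketch, assuming a hypothetical filename (check the Files and versions tab for the actual one), the file can be fetched with huggingface_hub:

```python
# Minimal sketch: download the GGUF file from this repository.
# The filename below is an assumption; replace it with the actual file
# listed under "Files and versions".
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="AlbertUnn/LlammasGGUF",
    filename="llammas-q4_k_m.gguf",  # hypothetical filename
)
print(model_path)  # local path of the cached download
```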
Format: GGUF
Model size: 6.74B params
Architecture: llama
Quantization: 4-bit (Q4_K_M)
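
One way to run a Q4_K_M GGUF file locally is through llama-cpp-python; the sketch below reuses the hypothetical filename from above and illustrative generation settings, not values recommended by the model authors.

```python
# Minimal sketch: run the 4-bit GGUF model with llama-cpp-python.
# The model path and generation settings are assumptions, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="llammas-q4_k_m.gguf",  # hypothetical filename
    n_ctx=2048,       # context window size
    n_gpu_layers=0,   # increase to offload layers when using a GPU build of llama.cpp
)

# Llammas is a conversational model, so a chat-style request is used here.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Please introduce yourself briefly."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```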