OLMo-1.7-7B-GGUF

This repo contains GGUF files of the allenai/OLMo-1.7-7B-hf model. These files can be used with llama.cpp or other software such as ollama and LM Studio. Most common quantization levels, converted from the bf16 file, are provided.
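As a minimal sketch of loading one of these files from Python, the example below uses llama-cpp-python together with huggingface_hub. The quantization filename is an assumption; check the repo's file listing for the exact names of the available quantizations.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF files from this repo.
# The filename below is an assumed example; pick any quantization actually provided.
model_path = hf_hub_download(
    repo_id="nopperl/OLMo-1.7-7B-GGUF",
    filename="OLMo-1.7-7B.Q4_K_M.gguf",
)

# Base model: plain text completion, no chat or prompt template.
llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("The capital of France is", max_tokens=16)
print(output["choices"][0]["text"])
```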

Note: this is a base model, not an instruction-tuned model. Hence, there is no specified prompt template.

For GGUF conversions of other OLMo versions, refer to nopperl/OLMo-1B-GGUF and nopperl/OLMo-7B-GGUF.

Model size: 6.89B params
Architecture: olmo
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit
