---
license: apache-2.0
tags:
- gguf
- coding
- quantized
- Q6_K
- olympiccoder
- llama.cpp
model_type: llama
inference: false
base_model: bartowski/open-r1_OlympicCoder-7B-GGUF
---
# OlympicCoder 7B Q6
A Q6_K-quantized GGUF build of OlympicCoder 7B, intended for algorithmic reasoning, competitive programming, and symbolic inference.
---
## Model Details
* **Model Name**: OlympicCoder 7B Q6
* **Quantization**: Q6\_K
* **Format**: GGUF
* **Size**: 6.25 GB
* **Architecture**: LLaMA-style 7B
* **Base Model**: [open-r1\_OlympicCoder-7B-GGUF](https://huggingface.co/bartowski/open-r1_OlympicCoder-7B-GGUF)
---
## Use Cases
* Competitive programming and Codeforces-style tasks
* Symbolic reasoning and algorithmic inference
* Code generation and technical prompts
---
## How to Run (with llama.cpp)
```bash
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a function that checks if a number is prime."
```
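For code generation you will often want to control output length, context size, and sampling. The flags below are standard `llama.cpp` options, but the specific values are illustrative examples, not tuned defaults for this model:

```shell
# Illustrative invocation; flag values are examples, not tuned settings.
# -n     : maximum number of tokens to generate
# -c     : context window size in tokens
# --temp : lower temperature for more deterministic code output
# -ngl   : layers to offload to the GPU (requires a CUDA/Metal build)
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf \
  -n 512 -c 4096 --temp 0.2 -ngl 35 \
  -p "Write a function that checks if a number is prime."
```

In recent `llama.cpp` releases the `main` binary is named `llama-cli`; adjust the command to match your build.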
Other tools:
* **LM Studio**: Import `.gguf` and chat directly
* **KoboldCpp** / **text-generation-webui**: Load as GGUF model
---
## License
Apache 2.0: free for commercial and research use.
---