---
license: apache-2.0
tags:
- gguf
- coding
- quantized
- Q6_K
- olympiccoder
- llama.cpp
model_type: llama
inference: false
base_model: bartowski/open-r1_OlympicCoder-7B-GGUF
---
# 🧐 OlympicCoder 7B Q6
A Q6_K-quantized GGUF build of OlympicCoder 7B, intended for algorithmic reasoning, competitive programming, and code generation.
---
## 📊 Model Details
* **Model Name**: OlympicCoder 7B Q6
* **Quantization**: Q6\_K
* **Format**: GGUF
* **Size**: 6.25 GB
* **Architecture**: LLaMA-style 7B
* **Base Model**: [open-r1\_OlympicCoder-7B-GGUF](https://huggingface.co/bartowski/open-r1_OlympicCoder-7B-GGUF)
---
## πŸ› οΈ Use Cases
* βš–οΈ Competitive programming and Codeforces-style tasks
* πŸ“ˆ Symbolic reasoning and algorithmic inference
* πŸ’» Code generation and technical prompts
---
## 🚀 How to Run (with llama.cpp)
```bash
./llama-cli -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a function that checks if a number is prime."
```
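For reference, a correct answer to the example prompt above looks like the following (a hand-written illustration, not actual model output):

```python
# Illustrative reference solution for the example prompt
# "Write a function that checks if a number is prime."
# Hand-written for the model card; not generated by the model.
def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    # Trial division by odd numbers up to sqrt(n)
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True
```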
Other tools:
* **LM Studio**: Import `.gguf` and chat directly
* **KoboldCpp** / **text-generation-webui**: Load as GGUF model
---
## 📄 License
Apache 2.0: free for commercial and research use.
---