---
license: apache-2.0
tags:
- gguf
- coding
- quantized
- Q6_K
- olympiccoder
- llama.cpp
model_type: llama
inference: false
base_model: bartowski/open-r1_OlympicCoder-7B-GGUF
---

# OlympicCoder 7B Q6

An optimized, Q6_K-quantized GGUF build of OlympicCoder 7B, designed for algorithmic reasoning, competitive programming, and symbolic inference.

---

## Model Details

* **Model Name**: OlympicCoder 7B Q6
* **Quantization**: Q6\_K
* **Format**: GGUF
* **Size**: 6.25 GB
* **Architecture**: LLaMA-style 7B
* **Base Model**: [open-r1\_OlympicCoder-7B-GGUF](https://huggingface.co/bartowski/open-r1_OlympicCoder-7B-GGUF)
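
To fetch only the Q6\_K file from the base repository, a minimal sketch with `huggingface-cli` is shown below. The filename is taken from the run command further down; verify it against the repository's file list before downloading.

```bash
# Download just the Q6_K GGUF from the base repo
# (filename assumed from the run example; check the repo's Files tab if it differs)
huggingface-cli download bartowski/open-r1_OlympicCoder-7B-GGUF \
  open-r1_OlympicCoder-7B-Q6_K.gguf \
  --local-dir .
```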

---

## Use Cases

* Competitive programming and Codeforces-style tasks
* Symbolic reasoning and algorithmic inference
* Code generation and technical prompts

---

## How to Run (with llama.cpp)

```bash
./main -m open-r1_OlympicCoder-7B-Q6_K.gguf -p "Write a function that checks if a number is prime."
```
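
Recent llama.cpp builds ship the CLI as `llama-cli` rather than `./main`. A slightly fuller invocation is sketched below; the flag values (context size, temperature, token budget, GPU offload) are illustrative defaults, not tuned recommendations.

```bash
# llama-cli is the current name of the llama.cpp CLI (formerly ./main).
# -c sets the context window, --temp the sampling temperature,
# -n the number of tokens to generate, -ngl the GPU layer offload count.
./llama-cli -m open-r1_OlympicCoder-7B-Q6_K.gguf \
  -c 4096 --temp 0.2 -n 512 -ngl 99 \
  -p "Write a function that checks if a number is prime."
```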

Other tools:

* **LM Studio**: Import the `.gguf` file and chat directly
* **KoboldCpp** / **text-generation-webui**: Load as a GGUF model
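
The model can also be served over an OpenAI-compatible HTTP API with llama.cpp's `llama-server`. A minimal sketch follows; the port and request payload are arbitrary examples.

```bash
# Start an OpenAI-compatible server (llama.cpp's llama-server, formerly ./server)
./llama-server -m open-r1_OlympicCoder-7B-Q6_K.gguf -c 4096 --port 8080

# Query it from another shell using the OpenAI chat-completions format
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write a function that checks if a number is prime."}
        ],
        "temperature": 0.2
      }'
```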

---

## License

Apache 2.0, free for commercial and research use.

---
|