---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
tags:
- gguf
- llama.cpp
- qwen
- quantization
---

# Qwen3-8B-GGUF

## 🤗 Provided by Codebasic

This model was converted to GGUF format and distributed by **Codebasic** (codebasic).

This repository provides the [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) model in several GGUF quantizations. It can be used with any runtime that supports the GGUF format, such as llama.cpp, text-generation-webui, and koboldcpp.
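
To fetch a single quantization, you can use the Hugging Face CLI. This is a minimal sketch; the repo ID `codebasic/Qwen3-8B-GGUF` is assumed from this card's title, so substitute the actual repository path:

```bash
# Install the Hugging Face CLI if needed
pip install -U "huggingface_hub[cli]"

# Download one file from this repository
# (repo ID assumed from this card's title; adjust if it differs)
huggingface-cli download codebasic/Qwen3-8B-GGUF \
  Qwen3-8B-Q8_0.gguf \
  --local-dir ./models/Qwen3-8B
```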

---

## 📦 Provided Files

| Filename | Quantization | Approx. memory required | Description |
|----------|--------------|-------------------------|-------------|
| `Qwen3-8B-F16.gguf` | FP16 (unquantized) | ~16 GB | Original FP16 weights (GPU / high-memory environments) |
| `Qwen3-8B-Q8_0.gguf` | Q8_0 | ~9 GB | High-quality quantization with near-FP16 accuracy |

> 💡 Memory requirements are estimates and may vary by environment.
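
As a quick sanity check before picking a file, you can compare the sizes above with the memory available on your machine (a minimal sketch for Linux; paths assume the download layout shown earlier):

```bash
# Show available system RAM (Linux; use `vm_stat` on macOS)
free -h

# Confirm each downloaded file's size roughly matches the table above
ls -lh ./models/Qwen3-8B/*.gguf
```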

---

## 🚀 Usage

### 1. Docker (llama.cpp Q8_0 example)

```bash
docker run -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:full \
  --run -m /models/Qwen3-8B/Qwen3-8B-Q8_0.gguf \
  -p "Introduce language models"
```
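
If llama.cpp is built locally rather than run via Docker, the same file works with its standard binaries. A minimal sketch, assuming `llama-cli` and `llama-server` are on your `PATH` and the model sits at the download path used above:

```bash
# One-shot generation with the llama.cpp CLI
llama-cli -m ./models/Qwen3-8B/Qwen3-8B-Q8_0.gguf \
  -p "Introduce language models" -n 256

# Or serve an OpenAI-compatible HTTP API on port 8080
llama-server -m ./models/Qwen3-8B/Qwen3-8B-Q8_0.gguf --port 8080
```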