Qwen3-8B-GGUF
Provided by codebasic

This model was converted to the GGUF format and published by **codebasic**.

This repository provides the Qwen3-8B model in several GGUF quantization variants.
They can be used in any environment that supports the GGUF format, such as llama.cpp, text-generation-webui, and koboldcpp.
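If you prefer to fetch a file programmatically instead of through the web UI, the sketch below uses `huggingface_hub`. The repo id is an assumption based on the codebasic namespace and may differ from the actual repository path.

```python
# Minimal download sketch using huggingface_hub (pip install huggingface_hub).
# NOTE: the repo_id below is an assumption; replace it with the actual repository id.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="codebasic/Qwen3-8B-GGUF",   # assumed namespace/repo name
    filename="Qwen3-8B-Q8_0.gguf",       # one of the files listed in the table below
)
print(model_path)  # local cache path; pass this to llama.cpp or llama-cpp-python
```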
Provided Files

| File | Quantization | Approx. memory | Description |
|---|---|---|---|
| Qwen3-8B-F16.gguf | FP16 (unquantized) | ~16 GB | Original FP16 weights (GPU / high-memory environments) |
| Qwen3-8B-Q8_0.gguf | Q8_0 | ~9 GB | High-quality quantization, near-FP16 accuracy |

Memory requirements are estimates and may vary by environment.
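The figures in the table follow roughly from parameter count times bytes per weight. A back-of-the-envelope check, assuming ~8.2B parameters for Qwen3-8B, looks like this:

```python
# Rough weight-memory estimate: parameter count x bytes per weight.
# Actual runtime use is higher once the KV cache and scratch buffers are added.
params = 8.2e9  # approximate parameter count of Qwen3-8B

bytes_per_weight = {
    "F16": 2.0,       # 16-bit floats
    "Q8_0": 34 / 32,  # 32 one-byte weights + one 2-byte scale per block
}

for name, bpw in bytes_per_weight.items():
    print(f"{name}: ~{params * bpw / 1024**3:.1f} GiB of weights")
# F16:  ~15.3 GiB  -> consistent with the ~16 GB figure above
# Q8_0: ~8.1 GiB   -> consistent with the ~9 GB figure above
```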
Usage

1. Docker (llama.cpp, Q8_0 example)

```bash
docker run -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:full \
  --run -m /models/Qwen3-8B/Qwen3-8B-Q8_0.gguf \
  -p "Introduce language models"
```