# gemma-2-9B-it-q4_0
This is a quantized version of the Gemma 2 9B instruct model using the Q4_0 quantization method.
## Model Details
- **Original Model**: [Gemma2-9B-it](https://huggingface.co/google/gemma-2-9b-it)
- **Quantization Method**: Q4_0
- **Precision**: 4-bit
## Usage
You can use this model directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), for example via its Python bindings as sketched below.
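A minimal sketch using the llama-cpp-python bindings (an assumption; the README itself only mentions llama.cpp). The GGUF filename shown here is hypothetical and should be replaced with the actual file from this repository.

```python
# Sketch: load the quantized GGUF model and run a chat completion
# via llama-cpp-python. The model filename below is an assumption.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-9b-it-q4_0.gguf",  # hypothetical filename
    n_ctx=4096,                            # context window
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Q4_0 quantization in one sentence."}],
    max_tokens=128,
)

print(output["choices"][0]["message"]["content"])
```

The llama.cpp command-line tools can also load the GGUF file directly; the Python bindings are shown here only as one common way to integrate the model into an application.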