---
license: apache-2.0
---
# gemma-2-9B-it-q4_0
This is a quantized version of the Gemma 2 9B instruct model, created using the Q4_0 quantization method.
## Model Details
- Original Model: Gemma2-9B-it
- Quantization Method: Q4_0
- Precision: 4-bit
## Usage
You can use the model directly with llama.cpp.
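A minimal sketch of a llama.cpp invocation, assuming the quantized weights are in a GGUF file named `gemma-2-9b-it-q4_0.gguf` (the filename is an assumption; replace it with the actual file from this repository):

```shell
# Run a single prompt through the quantized model with llama.cpp's CLI.
# -m: path to the GGUF model file (assumed name, adjust as needed)
# -p: the prompt text
# -n: maximum number of tokens to generate
./llama-cli -m gemma-2-9b-it-q4_0.gguf \
  -p "Why is the sky blue?" \
  -n 256
```

`llama-cli` is built from the llama.cpp source tree; build it first (e.g. with CMake) and make sure the binary is on your `PATH` or invoked from the build directory.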