---
library_name: transformers
tags: []
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---

# Gemma-2B GGUF

This is a quantized version of the [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) model, produced with [llama.cpp](https://github.com/ggerganov/llama.cpp).

This model card corresponds to the 2B instruct version of the Gemma model. You can also visit the model cards of the [7B base model](https://huggingface.co/google/gemma-7b), the [7B instruct model](https://huggingface.co/google/gemma-7b-it), and the [2B base model](https://huggingface.co/google/gemma-2b).

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)

## ⚡ Quants

* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K.
* `q3_k_s`: Uses Q3_K for all tensors.
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0; however, it offers faster inference than the q5 models.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.
* `q4_k_s`: Uses Q4_K for all tensors.
* `q5_0`: Higher accuracy, higher resource usage, and slower inference.
* `q5_1`: Even higher accuracy and resource usage, and even slower inference.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K.
* `q5_k_s`: Uses Q5_K for all tensors.
* `q6_k`: Uses Q8_K for all tensors.
* `q8_0`: Almost indistinguishable from float16. High resource use and slow; not recommended for most users.

## 💻 Usage

This model can be used with the latest version of llama.cpp, but it is not necessarily compatible with other tools yet.
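
To fetch one of the quants listed above, the `huggingface_hub` client is one option. A minimal sketch follows; note that the `repo_id` and `filename` below are illustrative placeholders, not values taken from this card, so substitute this repository's id and the exact filename of the quant you want.

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from the Hub.
# NOTE: repo_id and filename are hypothetical placeholders; replace them
# with this repo's id and the filename of the quant you chose above.
model_path = hf_hub_download(
    repo_id="your-username/gemma-2b-gguf",
    filename="gemma-2b-it.q4_k_m.gguf",
)
print(model_path)  # local path to the downloaded file
```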
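
For a quick local test, here is a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings. This card only mentions llama.cpp itself, so treat these bindings as one assumed option; running the GGUF file directly with the llama.cpp CLI works as well. The `model_path` continues from the download sketch above.

```python
from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window size.
llm = Llama(model_path=model_path, n_ctx=2048)

# Gemma instruct models expect this turn-based prompt format.
prompt = (
    "<start_of_turn>user\n"
    "Explain quantization in one sentence.<end_of_turn>\n"
    "<start_of_turn>model\n"
)

# Run a single completion with conservative sampling defaults.
output = llm(prompt, max_tokens=64, temperature=0.7)
print(output["choices"][0]["text"])
```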