---
library_name: transformers
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access CodeGemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
pipeline_tag: text-generation
widget:
  - text: >
      <start_of_turn>user
      Write a Python function to calculate the nth fibonacci number.<end_of_turn>
      <start_of_turn>model
inference:
  parameters:
    max_new_tokens: 200
license: gemma
license_link: https://ai.google.dev/gemma/terms
quantized_by: bartowski
---

## ExLlamaV2 Quantizations of codegemma-7b-it

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.18">turboderp's ExLlamaV2 v0.0.18</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains a quantization at a different bits per weight, while `main` holds only the measurement.json used for further conversions.

Original model: https://huggingface.co/google/codegemma-7b-it

## Prompt format

```
<bos><start_of_turn>user
{prompt}<end_of_turn>
<start_of_turn>model
```

A minimal Python sketch of building this string is included at the end of this card.

## Available sizes

This model has no GQA, so VRAM requirements will be higher than for comparable models that use it.

| Branch | Bits | lm_head bits | VRAM (4k context) | VRAM (16k context) | Description |
| ------ | ---- | ------------ | ----------------- | ------------------ | ----------- |
| [8_0](https://huggingface.co/bartowski/codegemma-7b-it-exl2/tree/8_0) | 8.0 | 8.0 | 14.0 GB | 19.4 GB | Maximum quality that ExLlamaV2 can produce, near-unquantized performance. |
| [6_5](https://huggingface.co/bartowski/codegemma-7b-it-exl2/tree/6_5) | 6.5 | 8.0 | 12.5 GB | 17.9 GB | Near-unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/codegemma-7b-it-exl2/tree/5_0) | 5.0 | 6.0 | 10.9 GB | 16.3 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/bartowski/codegemma-7b-it-exl2/tree/4_25) | 4.25 | 6.0 | 10.2 GB | 15.7 GB | GPTQ-equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/codegemma-7b-it-exl2/tree/3_5) | 3.5 | 6.0 | 9.5 GB | 14.9 GB | Lower quality, not recommended. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/codegemma-7b-it-exl2 codegemma-7b-it-exl2-6_5
```

With the huggingface-cli (credit to TheBloke for the instructions; a Python alternative is sketched at the end of this card):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `codegemma-7b-it-exl2`:

```shell
mkdir codegemma-7b-it-exl2
huggingface-cli download bartowski/codegemma-7b-it-exl2 --local-dir codegemma-7b-it-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir codegemma-7b-it-exl2-6_5
huggingface-cli download bartowski/codegemma-7b-it-exl2 --revision 6_5 --local-dir codegemma-7b-it-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which sometimes has trouble with `_` in folder names):

```shell
mkdir codegemma-7b-it-exl2-6.5
huggingface-cli download bartowski/codegemma-7b-it-exl2 --revision 6_5 --local-dir codegemma-7b-it-exl2-6.5 --local-dir-use-symlinks False
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
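
## Example: building the prompt in Python

For reference, here is a minimal sketch of wrapping a user message in the prompt format above. `format_prompt` is an illustrative helper, not part of any library; note that many loaders' tokenizers prepend `<bos>` automatically when encoding, in which case it should be left out of the string.

```python
def format_prompt(user_message: str) -> str:
    """Wrap a user message in the CodeGemma instruct turn format.

    Includes the literal <bos> token as shown in the prompt format
    section; omit it if your tokenizer already adds BOS on encode.
    """
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


if __name__ == "__main__":
    print(format_prompt("Write a Python function to calculate the nth fibonacci number."))
```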
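
## Example: downloading with the huggingface_hub Python API

As an alternative to the `huggingface-cli` commands above, the same download can be done from Python with `snapshot_download`. This is a sketch assuming `huggingface-hub` is installed (`pip3 install huggingface-hub`); the local folder name is just an example.

```python
from huggingface_hub import snapshot_download

# Fetch the 6_5 branch into a local folder; swap `revision` for any other
# branch from the table above (e.g. "8_0", "5_0", "4_25", "3_5").
snapshot_download(
    repo_id="bartowski/codegemma-7b-it-exl2",
    revision="6_5",
    local_dir="codegemma-7b-it-exl2-6_5",
)
```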