---
license: cc-by-nc-4.0
base_model: google/gemma-7b-it
tags:
- generated_from_trainer
- axolotl
- gemma
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: gemma-7b-openhermes
results: []
datasets:
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
language:
- en
library_name: transformers
pipeline_tag: text-generation
quantized_by: bartowski
---
## Exllama v2 Quantizations of gemma-7b-openhermes
Using turboderp's ExLlamaV2 v0.0.13 for quantization.
The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/abideen/gemma-7b-openhermes
No GQA - Gemma does not use grouped-query attention, so KV cache VRAM requirements will be higher than for similarly sized models that use it.
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/8_0) | 8.0 | 8.0 | 14.0 GB | 19.4 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/6_5) | 6.5 | 8.0 | 12.5 GB | 17.9 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/5_0) | 5.0 | 6.0 | 10.9 GB | 16.3 GB | Slightly lower quality vs 6.5, great for 12GB cards with 4k context. |
| [4_25](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/4_25) | 4.25 | 6.0 | 10.2 GB | 15.7 GB | GPTQ equivalent bits per weight, ideal for 16GB cards at 16k context. |
| [3_5](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/3_5) | 3.5 | 6.0 | 9.5 GB | 14.9 GB | Lower quality, not recommended. |
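## Prompt format
The `chatml` tag and the ChatML-formatted DPO dataset suggest ChatML-style prompting; check the original model card to confirm:
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```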
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/gemma-7b-openhermes-exl2 gemma-7b-openhermes-exl2-6_5
```
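Note that Hugging Face stores model weights with Git LFS, so run `git lfs install` once beforehand or the clone will contain only small pointer files instead of the actual weights.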
With the `huggingface-hub` CLI (credit to TheBloke for instructions), first install it:
```shell
pip3 install huggingface-hub
```
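Alternatively, the same download can be scripted from Python with `huggingface_hub.snapshot_download`; a minimal sketch that fetches the 6.5 bpw branch (the target folder name is just an example):
```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch into a local folder, without symlinks,
# matching the CLI flags used in the commands below.
snapshot_download(
    repo_id="bartowski/gemma-7b-openhermes-exl2",
    revision="6_5",
    local_dir="gemma-7b-openhermes-exl2-6_5",
    local_dir_use_symlinks=False,
)
```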
To download the `main` branch (only useful if you just want the measurement.json) to a folder called `gemma-7b-openhermes-exl2`:
```shell
mkdir gemma-7b-openhermes-exl2
huggingface-cli download bartowski/gemma-7b-openhermes-exl2 --local-dir gemma-7b-openhermes-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir gemma-7b-openhermes-exl2-6_5
huggingface-cli download bartowski/gemma-7b-openhermes-exl2 --revision 6_5 --local-dir gemma-7b-openhermes-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which sometimes has trouble with `_` in folder names, hence the `.` here):
```shell
mkdir gemma-7b-openhermes-exl2-6.5
huggingface-cli download bartowski/gemma-7b-openhermes-exl2 --revision 6_5 --local-dir gemma-7b-openhermes-exl2-6.5 --local-dir-use-symlinks False
```
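Once a branch is downloaded, it can be loaded directly with the `exllamav2` Python package. A minimal generation sketch, assuming the 6.5 bpw folder from above; the prompt and sampler settings are illustrative:
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at the downloaded branch folder (example path).
config = ExLlamaV2Config()
config.model_dir = "gemma-7b-openhermes-exl2-6_5"
config.prepare()

# Load the model, splitting it across available GPUs as needed.
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Illustrative sampler settings; tune to taste.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.9

# Generate up to 200 new tokens from an example prompt.
print(generator.generate_simple("Write a haiku about quantization.", settings, 200))
```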
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski