morriszms committed
Commit 3a60a0c
1 Parent(s): 974da9a

Update README.md

Files changed (1)
  1. README.md +20 -12
README.md CHANGED
@@ -32,8 +32,16 @@ This repo contains GGUF format model files for [rinna/gemma-2-baku-2b-it](https:
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
+
+<div style="text-align: left; margin: 20px 0;">
+    <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+        Run them on the TensorBlock client using your local machine ↗
+    </a>
+</div>
+
 ## Prompt template
 
+
 ```
 <bos><start_of_turn>user
 {prompt}<end_of_turn>
@@ -44,18 +52,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [gemma-2-baku-2b-it-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q2_K.gguf) | Q2_K | 1.145 GB | smallest, significant quality loss - not recommended for most purposes |
-| [gemma-2-baku-2b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q3_K_S.gguf) | Q3_K_S | 1.267 GB | very small, high quality loss |
-| [gemma-2-baku-2b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q3_K_M.gguf) | Q3_K_M | 1.361 GB | very small, high quality loss |
-| [gemma-2-baku-2b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q3_K_L.gguf) | Q3_K_L | 1.444 GB | small, substantial quality loss |
-| [gemma-2-baku-2b-it-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q4_0.gguf) | Q4_0 | 1.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [gemma-2-baku-2b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q4_K_S.gguf) | Q4_K_S | 1.526 GB | small, greater quality loss |
-| [gemma-2-baku-2b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q4_K_M.gguf) | Q4_K_M | 1.591 GB | medium, balanced quality - recommended |
-| [gemma-2-baku-2b-it-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q5_0.gguf) | Q5_0 | 1.753 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [gemma-2-baku-2b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q5_K_S.gguf) | Q5_K_S | 1.753 GB | large, low quality loss - recommended |
-| [gemma-2-baku-2b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q5_K_M.gguf) | Q5_K_M | 1.791 GB | large, very low quality loss - recommended |
-| [gemma-2-baku-2b-it-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q6_K.gguf) | Q6_K | 2.004 GB | very large, extremely low quality loss |
-| [gemma-2-baku-2b-it-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/tree/main/gemma-2-baku-2b-it-Q8_0.gguf) | Q8_0 | 2.593 GB | very large, extremely low quality loss - not recommended |
+| [gemma-2-baku-2b-it-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q2_K.gguf) | Q2_K | 1.145 GB | smallest, significant quality loss - not recommended for most purposes |
+| [gemma-2-baku-2b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q3_K_S.gguf) | Q3_K_S | 1.267 GB | very small, high quality loss |
+| [gemma-2-baku-2b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q3_K_M.gguf) | Q3_K_M | 1.361 GB | very small, high quality loss |
+| [gemma-2-baku-2b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q3_K_L.gguf) | Q3_K_L | 1.444 GB | small, substantial quality loss |
+| [gemma-2-baku-2b-it-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q4_0.gguf) | Q4_0 | 1.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [gemma-2-baku-2b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q4_K_S.gguf) | Q4_K_S | 1.526 GB | small, greater quality loss |
+| [gemma-2-baku-2b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q4_K_M.gguf) | Q4_K_M | 1.591 GB | medium, balanced quality - recommended |
+| [gemma-2-baku-2b-it-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q5_0.gguf) | Q5_0 | 1.753 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [gemma-2-baku-2b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q5_K_S.gguf) | Q5_K_S | 1.753 GB | large, low quality loss - recommended |
+| [gemma-2-baku-2b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q5_K_M.gguf) | Q5_K_M | 1.791 GB | large, very low quality loss - recommended |
+| [gemma-2-baku-2b-it-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q6_K.gguf) | Q6_K | 2.004 GB | very large, extremely low quality loss |
+| [gemma-2-baku-2b-it-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-2-baku-2b-it-GGUF/blob/main/gemma-2-baku-2b-it-Q8_0.gguf) | Q8_0 | 2.593 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
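Both hunks stop at the `## Downloading instruction` heading, so the body of that section does not appear in this diff. As a minimal sketch of fetching one of the files listed above, assuming the standard `huggingface_hub` Python API rather than whatever the README itself prescribes (the repo id and filename are taken from the table):

```python
# Minimal download sketch using huggingface_hub. This is an assumption:
# the README's own downloading instructions are not visible in this diff.
from huggingface_hub import hf_hub_download

# Repo id and filename come from the quant table above; Q4_K_M is the row
# marked "recommended".
model_path = hf_hub_download(
    repo_id="tensorblock/gemma-2-baku-2b-it-GGUF",
    filename="gemma-2-baku-2b-it-Q4_K_M.gguf",
    local_dir=".",  # optional: place the file in the current directory
)
print(model_path)
```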
 
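The first hunk's boundary cuts the prompt template off after `{prompt}<end_of_turn>`, so the rest of the template is not visible here. Treating `{prompt}` as a plain placeholder, a small sketch of filling it in follows; the trailing `<start_of_turn>model` line is an assumption based on the usual Gemma-2 chat format, not on anything shown in this diff:

```python
# Build a prompt string from the template shown in the diff. The template
# text after "{prompt}<end_of_turn>" is cut off by the hunk boundary;
# "<start_of_turn>model" below is an assumption (standard Gemma-2 format).
PROMPT_TEMPLATE = (
    "<bos><start_of_turn>user\n"
    "{prompt}<end_of_turn>\n"
    "<start_of_turn>model\n"
)

def build_prompt(user_message: str) -> str:
    """Substitute the user's message into the {prompt} placeholder."""
    return PROMPT_TEMPLATE.format(prompt=user_message)

print(build_prompt("Translate 'hello' into Japanese."))
```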
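The README states the files are compatible with llama.cpp as of commit b4011. One hedged way to exercise a downloaded file end to end is through the `llama-cpp-python` bindings; note this is a substitute for the llama.cpp CLI the README references, and the model path and settings below are illustrative assumptions:

```python
# Sketch: load a downloaded GGUF file and run one completion through the
# llama-cpp-python bindings (a stand-in for the llama.cpp CLI the README
# references; model path and settings here are illustrative assumptions).
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-baku-2b-it-Q4_K_M.gguf",  # file from the table above
    n_ctx=2048,  # context window; an arbitrary illustrative choice
)

# The template's leading <bos> is omitted here on the assumption that the
# bindings add a BOS token during tokenization; verify for your setup.
prompt = (
    "<start_of_turn>user\n"
    "Introduce yourself briefly.<end_of_turn>\n"
    "<start_of_turn>model\n"  # assumed Gemma-2 continuation, as noted above
)

output = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(output["choices"][0]["text"])
```

The stop string mirrors the `<end_of_turn>` marker from the prompt template, so generation halts at the end of the model's turn.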