morriszms committed
Commit 654e70e · verified · 1 parent: 1332796

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,16 +1,16 @@
 ---
-language:
-- en
 library_name: transformers
-license: apache-2.0
+license: gemma
+license_link: https://ai.google.dev/gemma/terms
+extra_gated_heading: Access CodeGemma on Hugging Face
+extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
+  and agree to Google’s usage license. To do this, please ensure you’re logged-in
+  to Hugging Face and click below. Requests are processed immediately.
+extra_gated_button_content: Acknowledge license
+base_model: google/codegemma-2b
 tags:
-- unsloth
-- transformers
-- gemma
-- bnb
 - TensorBlock
 - GGUF
-base_model: unsloth/codegemma-2b
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -24,13 +24,12 @@ base_model: unsloth/codegemma-2b
 </div>
 </div>
 
-## unsloth/codegemma-2b - GGUF
+## google/codegemma-2b - GGUF
 
-This repo contains GGUF format model files for [unsloth/codegemma-2b](https://huggingface.co/unsloth/codegemma-2b).
+This repo contains GGUF format model files for [google/codegemma-2b](https://huggingface.co/google/codegemma-2b).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
-
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 Run them on the TensorBlock client using your local machine ↗
@@ -39,7 +38,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 ## Prompt template
 
-
 ```
 
 ```
@@ -48,18 +46,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [codegemma-2b-Q2_K.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q2_K.gguf) | Q2_K | 1.078 GB | smallest, significant quality loss - not recommended for most purposes |
-| [codegemma-2b-Q3_K_S.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q3_K_S.gguf) | Q3_K_S | 1.200 GB | very small, high quality loss |
-| [codegemma-2b-Q3_K_M.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q3_K_M.gguf) | Q3_K_M | 1.289 GB | very small, high quality loss |
-| [codegemma-2b-Q3_K_L.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q3_K_L.gguf) | Q3_K_L | 1.365 GB | small, substantial quality loss |
-| [codegemma-2b-Q4_0.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q4_0.gguf) | Q4_0 | 1.445 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [codegemma-2b-Q4_K_S.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q4_K_S.gguf) | Q4_K_S | 1.453 GB | small, greater quality loss |
-| [codegemma-2b-Q4_K_M.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q4_K_M.gguf) | Q4_K_M | 1.518 GB | medium, balanced quality - recommended |
-| [codegemma-2b-Q5_0.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q5_0.gguf) | Q5_0 | 1.675 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [codegemma-2b-Q5_K_S.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q5_K_S.gguf) | Q5_K_S | 1.675 GB | large, low quality loss - recommended |
-| [codegemma-2b-Q5_K_M.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q5_K_M.gguf) | Q5_K_M | 1.713 GB | large, very low quality loss - recommended |
-| [codegemma-2b-Q6_K.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q6_K.gguf) | Q6_K | 1.921 GB | very large, extremely low quality loss |
-| [codegemma-2b-Q8_0.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q8_0.gguf) | Q8_0 | 2.486 GB | very large, extremely low quality loss - not recommended |
+| [codegemma-2b-Q2_K.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q2_K.gguf) | Q2_K | 1.158 GB | smallest, significant quality loss - not recommended for most purposes |
+| [codegemma-2b-Q3_K_S.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q3_K_S.gguf) | Q3_K_S | 1.288 GB | very small, high quality loss |
+| [codegemma-2b-Q3_K_M.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q3_K_M.gguf) | Q3_K_M | 1.384 GB | very small, high quality loss |
+| [codegemma-2b-Q3_K_L.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q3_K_L.gguf) | Q3_K_L | 1.466 GB | small, substantial quality loss |
+| [codegemma-2b-Q4_0.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q4_0.gguf) | Q4_0 | 1.551 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [codegemma-2b-Q4_K_S.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q4_K_S.gguf) | Q4_K_S | 1.560 GB | small, greater quality loss |
+| [codegemma-2b-Q4_K_M.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q4_K_M.gguf) | Q4_K_M | 1.630 GB | medium, balanced quality - recommended |
+| [codegemma-2b-Q5_0.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q5_0.gguf) | Q5_0 | 1.799 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [codegemma-2b-Q5_K_S.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q5_K_S.gguf) | Q5_K_S | 1.799 GB | large, low quality loss - recommended |
+| [codegemma-2b-Q5_K_M.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q5_K_M.gguf) | Q5_K_M | 1.840 GB | large, very low quality loss - recommended |
+| [codegemma-2b-Q6_K.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q6_K.gguf) | Q6_K | 2.062 GB | very large, extremely low quality loss |
+| [codegemma-2b-Q8_0.gguf](https://huggingface.co/tensorblock/codegemma-2b-GGUF/blob/main/codegemma-2b-Q8_0.gguf) | Q8_0 | 2.669 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
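The quantized files listed in the README's table follow a regular naming scheme, so their direct-download URLs can be built mechanically. Below is a minimal sketch, assuming the standard Hugging Face `resolve/main` download path and this repo's `codegemma-2b-<quant>.gguf` filename pattern; `gguf_download_url` is a hypothetical helper, not part of any library.

```python
# Sketch: build the direct-download URL for one of the GGUF files in the table.
# Assumes the Hugging Face "resolve/main" URL scheme and this repo's naming
# pattern; gguf_download_url is an illustrative helper, not an official API.

REPO_ID = "tensorblock/codegemma-2b-GGUF"


def gguf_download_url(quant: str, repo_id: str = REPO_ID) -> str:
    """Return the direct-download URL for a quant type such as 'Q4_K_M'."""
    filename = f"codegemma-2b-{quant}.gguf"
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"


if __name__ == "__main__":
    # e.g. the recommended medium quant from the table above
    print(gguf_download_url("Q4_K_M"))
```

In practice one would more likely fetch the file with the `huggingface_hub` library (`hf_hub_download`) or the `huggingface-cli download` command rather than raw URLs, since those handle authentication and caching.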
codegemma-2b-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f373b6a9e6c0324e5ad6e2e1a9a0d69d1b58bc694b34338f8fb926f311c175e8
-size 1157923456
+oid sha256:c78b7fcfc96d94a4238b110f9ed54024d6dbfc1f05ccfe48b75c03f6e9a8ab12
+size 1157923392
codegemma-2b-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:885c14e076bf6de0e190546dbdd3a3d5f467303728b8075d8a7be4e6c9667f32
-size 1465590400
+oid sha256:996edf1c1afb79a264aa3f5aefa3347fc1c4e742ebf7285a82c5d1fd6173c2eb
+size 1465590336
codegemma-2b-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7e7f792f42ed76219b77f830d18d3ac945e7b5fa3bc109478aec26946d84f7cb
-size 1383801472
+oid sha256:27bbcbaa978d4d72db447b6a66cdb025448e701ef140a6dbdc47cf12dcabcb34
+size 1383801408
codegemma-2b-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5ab18c7a655507bd17229f319951ab7d8c48e7eea557ba49e40c9d5399f5ffb0
-size 1287979648
+oid sha256:abe3be88a14cdd4413057662685ec44e00a0fadc757e5b01b64770e661122853
+size 1287979584
codegemma-2b-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:591d0ab60a1c3ee5cd4543ce5a063ca01df428365991d70de396bee3bdf9d19e
-size 1551188608
+oid sha256:130d67e375934ed380a3f70abcb8587d2189f80d4822d03893366e01e45bd98e
+size 1551188544
codegemma-2b-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a715e91c9dab4cb2f2ac9b876128f63e1b44072ab7eeec79115127a51c565a1e
-size 1630261888
+oid sha256:9b5d61b7a3c07a43f477ca4a9a60a0e4f6e2fd9891f1ec14749d370128fc41c2
+size 1630261824
codegemma-2b-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:76719ff143db4ad0de58310b36b51229c2623a6628fd8d3b9454ede8a48545da
-size 1559839360
+oid sha256:da957381b97393749b04cf65c2eb932360b2f95d476e106d3684bb37d1e53981
+size 1559839296
codegemma-2b-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a1a18f2869ccb30eee8408a4459d21a7f5d163fb6759ab3fb9621c591324c84c
-size 1798914688
+oid sha256:40643820af73ed8cd90f58f9914d00067d7d222972475fd58afb5e877d0470f2
+size 1798914624
codegemma-2b-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f7cc1ede1296de5888a6f37d15f826a57ecc7f2fe305536a5c7f190ca84ace90
-size 1839649408
+oid sha256:5eb96286d9542bc8bc1261a5d211c422b15832bf4403194dc5bf98886f635ed0
+size 1839649344
codegemma-2b-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:63ab28ec293714cfad64de1f7c0c779ca1fa64a2491dcb625d9becff808b8cf8
-size 1798914688
+oid sha256:c84c85636caf192c0d1832979db47c14c85f69ef7308b2d250df3cb80d7a768d
+size 1798914624
codegemma-2b-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:46bb6814bfd1db486cae85a972b0a9a49a26f236848554d158e556c8f7d80068
-size 2062123648
+oid sha256:386b8821aa78daaaf2d6a217c9adb39cf57c21c5ca800b0fb1f7aec26bbcf208
+size 2062123584
codegemma-2b-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d5298016d8656691c0743db37e206e6244f9a2efcf390c2cd7805e8d7240d59f
-size 2669068928
+oid sha256:2a2b42b922d80a86eadc172845b8416fef63b8d77402ce635a24b94462ee863a
+size 2669068864