morriszms committed · Commit 54cecdc · verified · 1 Parent(s): 2de2eef

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ bloom-1b7-fp32-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
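
The added .gitattributes entries mark each GGUF file as a Git LFS object so the large binaries are stored via LFS rather than in Git history. For context only: lines in this form are what `git lfs track` writes (here they appear to have been added per file by the upload tooling); a minimal sketch, with an illustrative pattern:

```shell
# Append filter/diff/merge LFS rules for matching files to .gitattributes
git lfs track "*.gguf"
```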
README.md ADDED
@@ -0,0 +1,126 @@
+ ---
+ license: bigscience-bloom-rail-1.0
+ language:
+ - ak
+ - ar
+ - as
+ - bm
+ - bn
+ - ca
+ - code
+ - en
+ - es
+ - eu
+ - fon
+ - fr
+ - gu
+ - hi
+ - id
+ - ig
+ - ki
+ - kn
+ - lg
+ - ln
+ - ml
+ - mr
+ - ne
+ - nso
+ - ny
+ - or
+ - pa
+ - pt
+ - rn
+ - rw
+ - sn
+ - st
+ - sw
+ - ta
+ - te
+ - tn
+ - ts
+ - tum
+ - tw
+ - ur
+ - vi
+ - wo
+ - xh
+ - yo
+ - zh
+ - zhs
+ - zht
+ - zu
+ pipeline_tag: text-generation
+ tags:
+ - TensorBlock
+ - GGUF
+ base_model: LazarusNLP/bloom-1b7-fp32
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## LazarusNLP/bloom-1b7-fp32 - GGUF
+
+ This repo contains GGUF format model files for [LazarusNLP/bloom-1b7-fp32](https://huggingface.co/LazarusNLP/bloom-1b7-fp32).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
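+ For a quick local test, a quantized file from this repo can be loaded with the llama.cpp CLI built from a compatible commit. The snippet below is a minimal sketch: it assumes the `llama-cli` binary from a recent llama.cpp build and that `bloom-1b7-fp32-Q4_K_M.gguf` has already been downloaded to the current directory; adjust the binary name and path to your setup.
+
+ ```shell
+ # Generate 64 tokens from a short prompt using the Q4_K_M quantization (paths/flags are illustrative)
+ ./llama-cli -m ./bloom-1b7-fp32-Q4_K_M.gguf -p "Once upon a time" -n 64
+ ```
+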
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+
+ ```
+
+ (This base model does not define a chat/prompt template, so the template above is empty.)
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [bloom-1b7-fp32-Q2_K.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q2_K.gguf) | Q2_K | 1.053 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [bloom-1b7-fp32-Q3_K_S.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q3_K_S.gguf) | Q3_K_S | 1.177 GB | very small, high quality loss |
+ | [bloom-1b7-fp32-Q3_K_M.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q3_K_M.gguf) | Q3_K_M | 1.286 GB | very small, high quality loss |
+ | [bloom-1b7-fp32-Q3_K_L.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q3_K_L.gguf) | Q3_K_L | 1.346 GB | small, substantial quality loss |
+ | [bloom-1b7-fp32-Q4_0.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q4_0.gguf) | Q4_0 | 1.405 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [bloom-1b7-fp32-Q4_K_S.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q4_K_S.gguf) | Q4_K_S | 1.411 GB | small, greater quality loss |
+ | [bloom-1b7-fp32-Q4_K_M.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q4_K_M.gguf) | Q4_K_M | 1.495 GB | medium, balanced quality - recommended |
+ | [bloom-1b7-fp32-Q5_0.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q5_0.gguf) | Q5_0 | 1.620 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [bloom-1b7-fp32-Q5_K_S.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q5_K_S.gguf) | Q5_K_S | 1.620 GB | large, low quality loss - recommended |
+ | [bloom-1b7-fp32-Q5_K_M.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q5_K_M.gguf) | Q5_K_M | 1.687 GB | large, very low quality loss - recommended |
+ | [bloom-1b7-fp32-Q6_K.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q6_K.gguf) | Q6_K | 1.849 GB | very large, extremely low quality loss |
+ | [bloom-1b7-fp32-Q8_0.gguf](https://huggingface.co/tensorblock/bloom-1b7-fp32-GGUF/blob/main/bloom-1b7-fp32-Q8_0.gguf) | Q8_0 | 2.390 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/bloom-1b7-fp32-GGUF --include "bloom-1b7-fp32-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/bloom-1b7-fp32-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
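+
+ After downloading, you can optionally verify file integrity against the sha256 recorded in the corresponding Git LFS pointer (the pointers for each file are part of this commit). A minimal sketch, assuming the Q2_K file was downloaded to `MY_LOCAL_DIR`:
+
+ ```shell
+ # Compare against the oid sha256 in the LFS pointer for bloom-1b7-fp32-Q2_K.gguf
+ sha256sum MY_LOCAL_DIR/bloom-1b7-fp32-Q2_K.gguf
+ # expected: 3ce75762a05b2b6be474e65dd114481f1e68e6bfe26df6db7c40a58e5f2df1dc
+ ```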
bloom-1b7-fp32-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ce75762a05b2b6be474e65dd114481f1e68e6bfe26df6db7c40a58e5f2df1dc
+ size 1052760992
bloom-1b7-fp32-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79e79cb3c6d6733098ed19f9b33959b0e69f8ea55fbb669a1a05bcd8019b4d56
+ size 1346378656
bloom-1b7-fp32-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ab5741751f14d0ab2cd7a070c4b707534969e23ae5bd5d39d974ca193d933a1
+ size 1285561248
bloom-1b7-fp32-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18e1ce0077a9f6e1edaa7f61e71972ca359bc2007d52491b5f777ad6eb0df464
+ size 1176509344
bloom-1b7-fp32-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a88b48d1eee392aec6c92de0ad0bd018846a17e9689526efe5de3eec3b8b9522
+ size 1405180832
bloom-1b7-fp32-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d4ffb3352dae93ce9f708b6873eb516f809ae0b5c31df8de94b191f29958056
+ size 1494834080
bloom-1b7-fp32-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab51394e67a9bbcb519e4f9afbb0fc1025b72c1d9f966384ae8ca6920f1cb456
+ size 1411472288
bloom-1b7-fp32-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14c8edbdbd3ab5f53142243f5394eb5cc18e5ff7779307c2c4a52a7dd2af866e
+ size 1620401056
bloom-1b7-fp32-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51416fffb16cbeb8219d0f9a8bf0cc7cfe63485c25b5648db995cba5bc23809a
+ size 1687247776
bloom-1b7-fp32-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a961d0c8736c757c725e98b07c26f91a66c2e9fac2eab7375aa1336343b916ec
+ size 1620401056
bloom-1b7-fp32-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a50990ce8960bb7a0f3de219b870ca6dd2e467a9cfe6a682b2ba0ced9ab3444d
+ size 1849072544
bloom-1b7-fp32-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f0b7d6979f873e3ff513ed958c81bb1078b80542624294dfc826a4b72feae82
+ size 2390498208