morriszms committed · Commit a62a407 · verified · 1 Parent(s): d8c024a

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ firefly-mixtral-8x7b-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ base_model: YeungNLP/firefly-mixtral-8x7b
+ tags:
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## YeungNLP/firefly-mixtral-8x7b - GGUF
+
+ This repo contains GGUF format model files for [YeungNLP/firefly-mixtral-8x7b](https://huggingface.co/YeungNLP/firefly-mixtral-8x7b).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
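+ As a quick compatibility check, here is a minimal sketch of running one of these files with llama.cpp's CLI (assuming a llama.cpp build at or after the commit above and a file that has already been downloaded locally; the file name, prompt, and token count below are placeholder values, and the downloading instructions further down show how to fetch a file):
+
+ ```shell
+ # Generate a short completion from the Q4_K_M quant (placeholder local path)
+ ./llama-cli -m ./firefly-mixtral-8x7b-Q4_K_M.gguf \
+   -p "Hello, my name is" \
+   -n 64
+ ```
+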
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [firefly-mixtral-8x7b-Q2_K.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q2_K.gguf) | Q2_K | 17.311 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [firefly-mixtral-8x7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q3_K_S.gguf) | Q3_K_S | 20.433 GB | very small, high quality loss |
+ | [firefly-mixtral-8x7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q3_K_M.gguf) | Q3_K_M | 22.546 GB | very small, high quality loss |
+ | [firefly-mixtral-8x7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q3_K_L.gguf) | Q3_K_L | 24.170 GB | small, substantial quality loss |
+ | [firefly-mixtral-8x7b-Q4_0.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q4_0.gguf) | Q4_0 | 26.444 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [firefly-mixtral-8x7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q4_K_S.gguf) | Q4_K_S | 26.746 GB | small, greater quality loss |
+ | [firefly-mixtral-8x7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q4_K_M.gguf) | Q4_K_M | 28.448 GB | medium, balanced quality - recommended |
+ | [firefly-mixtral-8x7b-Q5_0.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q5_0.gguf) | Q5_0 | 32.231 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [firefly-mixtral-8x7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q5_K_S.gguf) | Q5_K_S | 32.231 GB | large, low quality loss - recommended |
+ | [firefly-mixtral-8x7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q5_K_M.gguf) | Q5_K_M | 33.230 GB | large, very low quality loss - recommended |
+ | [firefly-mixtral-8x7b-Q6_K.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q6_K.gguf) | Q6_K | 38.381 GB | very large, extremely low quality loss |
+ | [firefly-mixtral-8x7b-Q8_0.gguf](https://huggingface.co/tensorblock/firefly-mixtral-8x7b-GGUF/blob/main/firefly-mixtral-8x7b-Q8_0.gguf) | Q8_0 | 49.626 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub command-line client:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
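+ This installs the `huggingface-cli` tool used below; listing its help is a quick, optional way to confirm the installation worked (the exact set of subcommands shown may vary with the installed huggingface_hub version):
+
+ ```shell
+ # Show available huggingface-cli subcommands to confirm the install
+ huggingface-cli --help
+ ```
+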
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/firefly-mixtral-8x7b-GGUF --include "firefly-mixtral-8x7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/firefly-mixtral-8x7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
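+
+ Once a file has been downloaded, a minimal sketch of serving it with llama.cpp's built-in HTTP server (binary name and flags are as of roughly the commit referenced above; the path, context size, and port are placeholder values):
+
+ ```shell
+ # Serve the Q4_K_M quant on localhost:8080 with a 4096-token context
+ ./llama-server -m MY_LOCAL_DIR/firefly-mixtral-8x7b-Q4_K_M.gguf -c 4096 --port 8080
+ ```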
firefly-mixtral-8x7b-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf1f099c95c6e20caa841f5f951944cab8217325db93858c8fc2055add918509
+ size 17311229344
firefly-mixtral-8x7b-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7201703d3db586359275ddf1ffb9a67ac92bf32f8660d14f8923c1bb4d219e01
+ size 24169645472
firefly-mixtral-8x7b-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f458a3464ffe5cbabd5a3095af00c86b701019978c0ad4b998d4abc8922239e
+ size 22546449824
firefly-mixtral-8x7b-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c5a53d737a49cdc5c82e6b14a864d76874b6b8cc172ed7f1a32aada9c358585f
+ size 20432520608
firefly-mixtral-8x7b-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:198ede8add7a2cf35595dd48d3927d9cde035f01c80530f3dd89a74afed93a27
+ size 26443589024
firefly-mixtral-8x7b-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71e7531e16b0e6bad880bce34fab147209ef393deccc4f8d4c41ca1baddbd5d0
+ size 28448466336
firefly-mixtral-8x7b-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f2d815eb46412df9eb0635f9423b5e175483f98900641639ff2b86cc625d16c
+ size 26745578912
firefly-mixtral-8x7b-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9da418a5c4269ce7c4c925849375cbad04f4e25983b0871be0f73876c614f8d0
+ size 32231335328
firefly-mixtral-8x7b-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cece9e7493252bbeba9849a15f6027ff2ebf56e6ab1a80f1080a11109b4e1ac
+ size 33229579680
firefly-mixtral-8x7b-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6195edddd2368c8f12de092a6bc185dadc4e5bde63ee6267f6952df70b65385e
+ size 32231335328
firefly-mixtral-8x7b-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:808a15cf0721e0347c1ed09ff13d67a2cd1a8d1a82f4a9a7daf49da77dc5f1bb
+ size 38380815776
firefly-mixtral-8x7b-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:14bd6753e95b312ab511db36c7840ab28c4c23dc1fbd3c22d375ce6eea6d2ede
+ size 49626318240