morriszms committed · verified
Commit 20f3d49 · 1 Parent(s): 153bc6f

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mixtral_AI_Cyber_2.0-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Mixtral_AI_Cyber_2.0-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:afb8458699fbd2c3da7eb23d88e7634881c41f4e868ff5756fa92a9ee3790e52
+ size 2719242976
Mixtral_AI_Cyber_2.0-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12d9b5664e388db30ad45a4b2fd1d0017899f15085a0b6727cf7d5286e746d75
+ size 3822025440
Mixtral_AI_Cyber_2.0-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78038475d55661bedf4cd2c69cbaa67770c65f3b2579bd80b7df0bce60c6ecbd
+ size 3518986976
Mixtral_AI_Cyber_2.0-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:696d9d996ed531b1222969033488ea212c44158a0b8a442f34fd9b4a5b323b3a
+ size 3164568288
Mixtral_AI_Cyber_2.0-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:73ccbf09bd1be9b7cd682b9678bc5cb33763d3f02058614d9259dcc8616c574a
+ size 4108917472
Mixtral_AI_Cyber_2.0-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3393633fdf7cbdae0af3a7bfb6c16251ce85aea39654e542b24959b503c47af9
+ size 4368440032
Mixtral_AI_Cyber_2.0-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ea1cb90e28b0374f7c482559101bdf2f2ded4e78fc1122ad19a40fe9d53eb43
+ size 4140374752
Mixtral_AI_Cyber_2.0-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f956b567596cc01c3f5c53c6b7d013ffa7c6c25de8bd0908fa28c896f36365f
+ size 4997716704
Mixtral_AI_Cyber_2.0-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db07e4f4a866fa6274bf190731459fc150e9209c33fb47e5d328f70aaa868b65
+ size 5131410144
Mixtral_AI_Cyber_2.0-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f07250b940e8f12c0e6a6bbb0f4bc56853322e2f628b5a94adb133e138071185
+ size 4997716704
Mixtral_AI_Cyber_2.0-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6a940903ccadfc9a919dd0830ccdf9f8cb8ed87dd6745e8b136c329252b131a
+ size 5942065888
Mixtral_AI_Cyber_2.0-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58d417a05c8e05fa21463d9bae4604b84e259b7122436d00f5ff11caf35d76f6
+ size 7695858400
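Each `.gguf` entry in this commit is not the model weights themselves but a Git LFS pointer file: three `key value` lines recording the pointer spec version, a `sha256` object id, and the file size in bytes. A minimal sketch of reading such a pointer, using a hypothetical `parse_lfs_pointer` helper written for illustration:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>"; split on the first space only,
        # since the value (e.g. a URL) may itself contain no further keys.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The Q2_K pointer from the diff above:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:afb8458699fbd2c3da7eb23d88e7634881c41f4e868ff5756fa92a9ee3790e52
size 2719242976"""

fields = parse_lfs_pointer(pointer)
print(fields["size"])  # → 2719242976 (bytes, ≈2.7 GB, matching the table below)
```

Cloning the repo without `git lfs` installed fetches only these small pointer files, which is why the `.gitattributes` additions above matter.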
README.md ADDED
@@ -0,0 +1,106 @@
+ ---
+ base_model: LeroyDyer/Mixtral_AI_Cyber_2.0
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - 128k_Context
+ - chemistry
+ - biology
+ - music
+ - code
+ - medical
+ - text-generation-inference
+ - Cyber-Series
+ - TensorBlock
+ - GGUF
+ previous_Merges:
+ - rvv-karma/BASH-Coder-Mistral-7B
+ - Locutusque/Hercules-3.1-Mistral-7B
+ - KoboldAI/Mistral-7B-Erebus-v3 - NSFW
+ - Locutusque/Hyperion-2.1-Mistral-7B
+ - Severian/Nexus-IKM-Mistral-7B-Pytorch
+ - NousResearch/Hermes-2-Pro-Mistral-7B
+ - mistralai/Mistral-7B-Instruct-v0.2
+ - Nitral-AI/ProdigyXBioMistral_7B
+ - Nitral-AI/Infinite-Mika-7b
+ - Nous-Yarn-Mistral-7b-128k
+ - yanismiraoui/Yarn-Mistral-7b-128k-sharded
+ license: apache-2.0
+ language:
+ - en
+ metrics:
+ - accuracy
+ - brier_score
+ - code_eval
+ pipeline_tag: text-generation
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## LeroyDyer/Mixtral_AI_Cyber_2.0 - GGUF
+
+ This repo contains GGUF format model files for [LeroyDyer/Mixtral_AI_Cyber_2.0](https://huggingface.co/LeroyDyer/Mixtral_AI_Cyber_2.0).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Mixtral_AI_Cyber_2.0-Q2_K.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Mixtral_AI_Cyber_2.0-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
+ | [Mixtral_AI_Cyber_2.0-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
+ | [Mixtral_AI_Cyber_2.0-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
+ | [Mixtral_AI_Cyber_2.0-Q4_0.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Mixtral_AI_Cyber_2.0-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
+ | [Mixtral_AI_Cyber_2.0-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
+ | [Mixtral_AI_Cyber_2.0-Q5_0.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Mixtral_AI_Cyber_2.0-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
+ | [Mixtral_AI_Cyber_2.0-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
+ | [Mixtral_AI_Cyber_2.0-Q6_K.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
+ | [Mixtral_AI_Cyber_2.0-Q8_0.gguf](https://huggingface.co/tensorblock/Mixtral_AI_Cyber_2.0-GGUF/blob/main/Mixtral_AI_Cyber_2.0-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download the individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/Mixtral_AI_Cyber_2.0-GGUF --include "Mixtral_AI_Cyber_2.0-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/Mixtral_AI_Cyber_2.0-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```