TheBloke committed on
Commit 19ecd6a
1 Parent(s): 7ec748e

Upload README.md

Files changed (1):
  1. README.md +75 -1

README.md CHANGED

@@ -5,6 +5,18 @@ license: other
  model_creator: Zaraki Quem Parte
  model_name: Kuchiki L2 7B
  model_type: llama
+ prompt_template: 'Below is an instruction that describes a task. Write a response
+ that appropriately completes the request.
+
+
+ ### Instruction:
+
+ {prompt}
+
+
+ ### Response:
+
+ '
  quantized_by: TheBloke
  tags:
  - llama2
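The added `prompt_template` is the standard Alpaca format. As a minimal illustration (not part of the committed README), this Python sketch fills the template the way a client would before sending it to the model; the example instruction is made up:

```python
# Minimal sketch: filling the Alpaca-style prompt template added in the
# YAML metadata above. The example instruction is illustrative only.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

prompt = PROMPT_TEMPLATE.format(prompt="Summarise what a GGUF file is.")
print(prompt)
```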
 
@@ -58,6 +70,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
  <!-- repositories-available start -->
  ## Repositories available

+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kuchiki-L2-7B-AWQ)
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF)
  * [Zaraki Quem Parte's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/zarakiquemparte/kuchiki-l2-7b)
 
@@ -132,6 +145,63 @@ Refer to the Provided Files table below to see what files use which methods, and

  <!-- README_GGUF.md-provided-files end -->

+ <!-- README_GGUF.md-how-to-download start -->
+ ## How to download GGUF files
+
+ **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
+
+ The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
+ - LM Studio
+ - LoLLMS Web UI
+ - Faraday.dev
+
+ ### In `text-generation-webui`
+
+ Under Download Model, you can enter the model repo: TheBloke/Kuchiki-L2-7B-GGUF and below it, a specific filename to download, such as: kuchiki-l2-7b.q4_K_M.gguf.
+
+ Then click Download.
+
+ ### On the command line, including multiple files at once
+
+ I recommend using the `huggingface-hub` Python library:
+
+ ```shell
+ pip3 install 'huggingface-hub>=0.17.1'
+ ```
+
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+ ```shell
+ huggingface-cli download TheBloke/Kuchiki-L2-7B-GGUF kuchiki-l2-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
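The same single-file download can be done from Python. A minimal sketch using `huggingface_hub.hf_hub_download`, with the same repo and filename as the command above (not part of the committed README):

```python
# Sketch: download one GGUF file via the huggingface_hub Python API,
# mirroring the huggingface-cli command above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Kuchiki-L2-7B-GGUF",
    filename="kuchiki-l2-7b.q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
print(path)  # local path of the downloaded file
```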
+
+ <details>
+ <summary>More advanced huggingface-cli download usage</summary>
+
+ You can also download multiple files at once with a pattern:
+
+ ```shell
+ huggingface-cli download TheBloke/Kuchiki-L2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
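A Python equivalent of the pattern-based download, sketched with `huggingface_hub.snapshot_download` and `allow_patterns`; the pattern is copied verbatim from the CLI example above (not part of the committed README):

```python
# Sketch: fetch every file matching the pattern at once, mirroring the
# --include option used in the CLI command above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Kuchiki-L2-7B-GGUF",
    allow_patterns=["*Q4_K*gguf"],  # pattern copied from the CLI example
    local_dir=".",
    local_dir_use_symlinks=False,
)
```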
+
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+ ```shell
+ pip3 install hf_transfer
+ ```
+
+ And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+ ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Kuchiki-L2-7B-GGUF kuchiki-l2-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ Windows CLI users: run `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+ </details>
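For completeness, a sketch of enabling `hf_transfer` from Python rather than the shell; the variable must be set before `huggingface_hub` is imported, and it assumes `hf_transfer` is installed (not part of the committed README):

```python
# Sketch: enable hf_transfer from Python. Set the environment variable
# before importing huggingface_hub so the accelerated backend is used.
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/Kuchiki-L2-7B-GGUF",
    filename="kuchiki-l2-7b.q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
```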
+ <!-- README_GGUF.md-how-to-download end -->
+
  <!-- README_GGUF.md-how-to-run start -->
  ## Example `llama.cpp` command

 
@@ -217,7 +287,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

  **Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
+ **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov


  Thank you to all my generous patrons and donaters!
 
@@ -236,6 +306,10 @@ This merge of models (hermes and airoboros) was done with this [script](https://g

  This merge of Lora with Model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)

+ Quantized Model by @TheBloke:
+ - [GGUF](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF)
+ - [GPTQ](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GPTQ)
+
  Merge illustration:

  ![illustration](merge-illustration.png)