TheBloke committed · commit d25fd12 · verified · 1 parent: b63ab4e

Upload README.md

Files changed (1): README.md (new file, +507 −0)
---
base_model: KoboldAI/LLaMA2-13B-Estopia
inference: false
license: cc-by-nc-4.0
model_creator: KoboldAI
model_name: Llama2 13B Estopia
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:

  '
quantized_by: TheBloke
tags:
- mergekit
- merge
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama2 13B Estopia - AWQ
- Model creator: [KoboldAI](https://huggingface.co/KoboldAI)
- Original model: [Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia)

<!-- description start -->
## Description

This repo contains AWQ model files for [KoboldAI's Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF)
* [KoboldAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->
<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantisation therefore uses the same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing, but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [KoboldAI's Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia).
<!-- licensing end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | Processing, coming soon |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to do a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/LLaMA2-13B-Estopia-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `LLaMA2-13B-Estopia-AWQ`.
7. Select **Loader: AutoAWQ**.
8. Click **Load**, and the model will load and be ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
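
If you prefer to fetch the files from the command line first, here is a brief sketch using `huggingface-cli` (the local directory name is just an example):

```shell
pip3 install huggingface-hub

# Download all files from the main branch into a local folder
huggingface-cli download TheBloke/LLaMA2-13B-Estopia-AWQ --local-dir LLaMA2-13B-Estopia-AWQ --local-dir-use-symlinks False
```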

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/LLaMA2-13B-Estopia-AWQ --quantization awq --dtype auto
```
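
Once the server is running, it can be queried over HTTP. A minimal sketch (the `/generate` endpoint, JSON fields and default port below follow vLLM's demo API server; adjust them if your setup differs):

```shell
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Tell me about AI", "max_tokens": 128, "temperature": 0.8}'
```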

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# A plain string (not an f-string), so .format() can fill in {prompt} below
prompt_template = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/LLaMA2-13B-Estopia-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/LLaMA2-13B-Estopia-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
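
As a sketch, those parameters might be used in a full `docker run` invocation like the following (the volume path, port mapping and GPU flags are illustrative assumptions for your environment, not tested values):

```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v /path/to/data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/LLaMA2-13B-Estopia-AWQ --port 3000 --quantize awq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```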

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

client = InferenceClient(endpoint_url)
# Send the fully formatted prompt template, not the bare prompt
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
230
+ <!-- README_AWQ.md-use-from-python start -->
231
+ ## Inference from Python code using Transformers
232
+
233
+ ### Install the necessary packages
234
+
235
+ - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
236
+ - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
237
+
238
+ ```shell
239
+ pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
240
+ ```
241
+
242
+ Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
243
+
244
+ If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
245
+
246
+ ```shell
247
+ pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
248
+ ```
249
+
250
+ If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
251
+
252
+ ```shell
253
+ pip3 uninstall -y autoawq
254
+ git clone https://github.com/casper-hansen/AutoAWQ
255
+ cd AutoAWQ
256
+ pip3 install .
257
+ ```
258
+
### Transformers example code (requires Transformers 4.35.0 or later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline

model_name_or_path = "TheBloke/LLaMA2-13B-Estopia-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Use a text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

# Convert the prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
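
AutoAWQ can also load these files directly, without going through the Transformers integration. A minimal sketch, not an official example from this card, assuming AutoAWQ 0.1.x, where `from_quantized` accepts `fuse_layers` and `safetensors` arguments:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/LLaMA2-13B-Estopia-AWQ"

# Load the quantised model with AutoAWQ's native loader
model = AutoAWQForCausalLM.from_quantized(model_name_or_path,
                                          fuse_layers=True,
                                          safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)

prompt = "Tell me about AI"
tokens = tokenizer(prompt, return_tensors='pt').input_ids.cuda()

generation_output = model.generate(tokens,
                                   do_sample=True,
                                   temperature=0.7,
                                   top_p=0.95,
                                   max_new_tokens=128)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```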

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: KoboldAI's Llama2 13B Estopia

# Introduction
- Estopia is a model focused on improving the dialogue and prose returned when using the instruct format. As a side benefit, character cards and similar seem to have also improved, remembering details well in many cases.
- It focuses on "guided narratives" - using instructions to guide or explore fictional stories, where you act as a guide for the AI to narrate and fill in the details.
- It has primarily been tested around prose, using instructions to guide narrative, detail retention and "neutrality" - in particular with regard to plot armour. Unless you define different rules for your adventure / narrative with instructions, it should be realistic in the responses provided.
- It has been tested using different modes, such as instruct, chat, adventure and story modes - and should be able to do them all to a degree, with its strengths being instruct and adventure, with story being a close second.
# Usage
- The Estopia model has been tested primarily using the Alpaca format, but given the range of models merged in, it likely has some understanding of other formats. Some examples of tested formats are below:
  - ```\n### Instruction:\nWhat colour is the sky?\n### Response:\nThe sky is...```
  - ```<Story text>\n***\nWrite a summary of the text above\n***\nThe story starts by...```
  - Using the Kobold Lite AI adventure mode
  - ```User:Hello there!\nAssistant:Good morning...\n```
- For settings, the following are recommended for general use (a sketch of how they might map onto an API request follows this list):
  - Temperature: 0.8-1.2
  - Min P: 0.05-0.1
  - Max P: 0.92, or 1 if using a Min P greater than 0
  - Top K: 0
  - Response length: Likely higher than your usual amount - for example, a commonly selected value is 512.
    - Note: Response lengths are not guaranteed to always be this length. On occasion, responses may be shorter if they convey the response entirely; other times they could be upwards of this value. It depends mostly on the character card, instructions, etc.
  - Rep Pen: 1.1
  - Rep Pen Range: 2-3x your response length
  - Stopping tokens (not needed, but can help if the AI is writing too much):
    - ```##||$||---||$||ASSISTANT:||$||[End||$||</s>``` - a single string for Kobold Lite combining the ones below
    - ```##```
    - ```---```
    - ```ASSISTANT:```
    - ```[End```
    - ```</s>```
- The settings above should provide a generally good experience balancing instruction following and creativity. Generally, the higher you set the temperature, the greater the creativity, but also the higher the chance of logical errors in the AI's responses.
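
For illustration, here is how those recommended settings might look as an API request. This is a hedged sketch, not part of the original card: it assumes a local KoboldCpp instance exposing its KoboldAI-compatible `/api/v1/generate` endpoint on the default port, and the exact values chosen within the recommended ranges are arbitrary.

```python
import requests

# Hedged sketch: field names follow KoboldCpp's KoboldAI-compatible API
payload = {
    "prompt": "### Instruction:\nWhat colour is the sky?\n### Response:\n",
    "temperature": 1.0,       # recommended range: 0.8-1.2
    "min_p": 0.05,            # recommended range: 0.05-0.1
    "top_p": 1.0,             # 1 when Min P is greater than 0
    "top_k": 0,
    "max_length": 512,        # response length
    "rep_pen": 1.1,
    "rep_pen_range": 1024,    # 2-3x the response length
    "stop_sequence": ["##", "---", "ASSISTANT:", "[End", "</s>"],
}

response = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(response.json()["results"][0]["text"])
```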
# Recipe
This model was made in three stages, along with many experimental stages which will be skipped for brevity. The first was internally referred to as EstopiaV9, which has a high degree of instruction following and creativity in responses. Its responses were generally shorter and a little more restricted in the scope of outputs, but it conveyed nuance better.
```yaml
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: Undi95/UtopiaXL-13B
    parameters:
      weight: 1.0
  - model: Doctor-Shotgun/cat-v1.0-13b
    parameters:
      weight: 0.02
  - model: PygmalionAI/mythalion-13b
    parameters:
      weight: 0.10
  - model: Undi95/Emerhyst-13B
    parameters:
      weight: 0.05
  - model: CalderaAI/13B-Thorns-l2
    parameters:
      weight: 0.05
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 0.20
dtype: float16
```
The second part of the merge was known as EstopiaV13. This produced responses which were long, but tended to write beyond good stopping points for further instructions to be added, as it leant heavily on novel-style prose. It did however benefit from a greater degree of neutrality as described above, and retained many of the detail-tracking abilities of V9.
```yaml
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: Undi95/UtopiaXL-13B
    parameters:
      weight: 1.0
  - model: Doctor-Shotgun/cat-v1.0-13b
    parameters:
      weight: 0.01
  - model: chargoddard/rpguild-chatml-13b
    parameters:
      weight: 0.02
  - model: PygmalionAI/mythalion-13b
    parameters:
      weight: 0.08
  - model: CalderaAI/13B-Thorns-l2
    parameters:
      weight: 0.02
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 0.20
dtype: float16
```
The third step was a merge between the two to retain the benefits of both as much as possible. This was performed using the DARE TIES merge method.
```yaml
# task-arithmetic style
models:
  - model: EstopiaV9
    parameters:
      weight: 1
      density: 1
  - model: EstopiaV13
    parameters:
      weight: 0.05
      density: 0.30
merge_method: dare_ties
base_model: TheBloke/Llama-2-13B-fp16
parameters:
  int8_mask: true
dtype: bfloat16
```
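
Configs like these are run with [mergekit](https://github.com/arcee-ai/mergekit). A minimal sketch, not from the original card, where the config filename and output path are illustrative:

```shell
pip3 install mergekit

# Run the merge described by the YAML config above (paths are examples)
mergekit-yaml estopia.yml ./LLaMA2-13B-Estopia --cuda
```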
# Model selection
- Undi95/UtopiaXL-13B
  - A solid all-around base for models, with the ability to write longer responses and generally good retention of detail.
- Doctor-Shotgun/cat-v1.0-13b
  - A medical-focused model, added to focus a little more on the human side of responses, such as psychology.
- PygmalionAI/mythalion-13b
  - A roleplay- and instruct-focused model, which improves attentiveness to character card details and the variety of responses.
- Undi95/Emerhyst-13B
  - A roleplay but also longer-form response model. It can be quite variable, but helps add to the depth and possible options the AI can respond with during narratives.
- CalderaAI/13B-Thorns-l2
  - A neutral and very attentive model. It is good at chat and following instructions, which benefits these modes.
- KoboldAI/LLaMA2-13B-Tiefighter
  - A solid all-around model, focusing on story writing and adventure modes. It provides all-around benefits to creativity and prose, along with adventure mode support.
- chargoddard/rpguild-chatml-13b
  - A roleplay model, which introduces new data and also improves detail retention in longer narratives.
# Notes
- With the differing models inside, this model will not have perfect end-of-sequence tokens, a problem many merges share. While attempts have been made to minimise this, you may occasionally get oddly behaving tokens - this can usually be resolved with a single quick manual edit, after which the model should pick up on it.
- Chat is one of the least tested areas for this model. It works fairly well, but it can be quite character-card dependent.
- This is a narrative- and prose-focused model. As a result, it can and will talk for you if guided to do so (such as asking it to act as a co-author or narrator) within instructions or other contexts. This can be mitigated mostly by adding instructions to limit this, or by using chat mode instead.
# Future areas
- LLaVA
  - Some success has been had with merging the LLaVA LoRA onto this model. While no in-depth testing has been performed, more narrative responses based on the images could be obtained - though there were drawbacks in the form of degraded performance in other areas, and hallucinations due to the fictional focus of this model.
- Stheno
  - A merge from Sao with similar promise. Some merge attempts have been made between the two and were promising, but not entirely consistent at the moment. With some refinement, this could produce an even stronger model.
- DynamicFactor
  - All the models used in this merge are based on Llama 2, but a DARE merge with DynamicFactor (an attempted refinement of Llama 2) showed a beneficial improvement to the instruction abilities of the model, along with lengthy responses. It lost a little of the variety of responses, so if the right balance could be found, the instruction-following and reasoning abilities could be improved even further.