---
tags:
- llama
- alpaca
- cot
- vicuna
- uncensored
- merge
- mix
---

## 13B-BlueMethod

## Composition:
BlueMethod is a somewhat convoluted experiment in tiered merging.
Adding to the experimental nature of the merge, the models were combined
with a custom script that randomized the percentage of each layer taken
from one model versus the next; a sketch of that approach follows the tier
list below. This is a warmup for a larger project.
[Tier One and Two merges not released; the names below are an internal naming convention.]

Tier One Merges:
13B-Metharme+13B-Nous-Hermes=13B-Methermes
13B-Vicuna-cocktail+13B-Manticore=13B-Vicortia
13B-HyperMantis+13B-Alpacino=13B-PsychoMantis

Tier Two Merges:
13B-Methermes+13B-Vicortia=13B-Methphistopheles
13B-PsychoMantis+13B-BlueMoonRP=13B-BlueMantis

Tier Three Merge:
13B-Methphistopheles+13B-BlueMantis=13B-BlueMethod

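Purely as an illustration of the idea (this is not the original merge script), a layer-wise randomized linear merge between two checkpoints of the same architecture might look roughly like the sketch below; the function name, repo IDs, ratio range, and seed are assumptions.

```python
# Minimal sketch of a layer-wise randomized linear merge (illustrative only).
import random

import torch
from transformers import AutoModelForCausalLM


def random_layer_merge(path_a, path_b, out_path, low=0.3, high=0.7, seed=0):
    """Blend two same-architecture checkpoints, drawing a fresh mix ratio for
    every parameter tensor so each layer gets a different proportion of each parent."""
    random.seed(seed)
    model_a = AutoModelForCausalLM.from_pretrained(path_a, torch_dtype=torch.float16)
    model_b = AutoModelForCausalLM.from_pretrained(path_b, torch_dtype=torch.float16)

    state_a = model_a.state_dict()
    state_b = model_b.state_dict()

    merged = {}
    for name, tensor_a in state_a.items():
        ratio = random.uniform(low, high)  # randomized percent for this tensor
        merged[name] = ratio * tensor_a + (1.0 - ratio) * state_b[name]

    model_a.load_state_dict(merged)
    model_a.save_pretrained(out_path)


# Hypothetical usage mirroring one of the Tier One merges:
# random_layer_merge("PygmalionAI/metharme-13b",
#                    "NousResearch/Nous-Hermes-13b",
#                    "./13B-Methermes")
```
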
## Use:
Multiple instruct models and model composites were combined to make the final model,
so it is highly open to experimental prompting; both the Alpaca and Vicuna instruct
formats can be used (examples below), and the results can be interesting.

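For reference, the commonly used Alpaca and Vicuna templates are shown below; the exact wording (especially the Vicuna system line) follows community convention rather than anything specified by this card, so treat both as starting points.

Alpaca:

```
### Instruction:
{your instruction}

### Response:
```

Vicuna:

```
A chat between a curious user and an artificial intelligence assistant.
USER: {your message}
ASSISTANT:
```
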
## Language Models and LoRAs Used Credits:

13B-Metharme by PygmalionAI

https://www.huggingface.co/PygmalionAI/metharme-13b

13B-Nous-Hermes by NousResearch

https://www.huggingface.co/NousResearch/Nous-Hermes-13b

13B-Vicuna-cocktail by reeducator

https://www.huggingface.co/reeducator/vicuna-13b-cocktail

13B-Manticore by openaccess-ai-collective

https://www.huggingface.co/openaccess-ai-collective/manticore-13b

13B-HyperMantis and 13B-Alpacino by Digitous

https://huggingface.co/digitous/13B-HyperMantis
https://huggingface.co/digitous/Alpacino13b

Also thanks to Meta for LLaMA.

Each model and LoRA was hand-picked and considered for what it could contribute to this ensemble.
Thanks to each and every one of you for your incredible work developing some of the best things
to come out of this community.