---
tags:
- merge
---

# 0x01-7x8B-hf

here we go again. a multi-step merge: various models involved at various ratios, combined with various methods. a sketch of what one such step can look like follows the parts list below.

this thing came to me in a fever dream while I was hung over, but after slightly tweaking the recipe it turned out surprisingly decent. I'm using it with MixLibra settings.

## Constituent parts

```yaml
# primordial_slop_a:
- model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
- model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_Medicine
- model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_WuKurtz
- model: mistralai/Mixtral-8x7B-v0.1+Epiculous/crunchy-onion-lora
- model: mistralai/Mixtral-8x7B-v0.1+maxkretchmer/gc-mixtral
# primordial_slop_b:
- model: Envoid/Mixtral-Instruct-ITR-8x7B
- model: crestf411/daybreak-mixtral-8x7b-v1.0-hf
- model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
- model: orangetin/OpenHermes-Mixtral-8x7B
- model: mistralai/Mixtral-8x7B-Instruct-v0.1+idegroup/PhyAssistant
- model: ycros/crunchy-onion-nx
- model: jondurbin/bagel-dpo-8x7b-v0.2
- model: amoldwalunj/Mixtral-8x7B-Instruct-v0.1-legal_finetune_mixtral_32k
# primordial_slop_c: a+b
# primordial_slop_d:
- model: Sao10K/Sensualize-Mixtral-bf16
- model: Envoid/Mixtral-Instruct-ITR-DADA-8x7B
```
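
the `+` in the paths above is mergekit's model-plus-LoRA notation: the adapter on the right of the `+` gets applied to the base model on the left before that model enters the merge. for illustration, here is a minimal sketch of what one step of a recipe like this could look like as a mergekit config. the file name, merge method, and all density/weight values are assumptions for the example, not the actual recipe behind this model:

```yaml
# hypothetical_step_a.yaml -- illustrative sketch only, NOT the real 0x01 recipe.
# merges the primordial_slop_a parts over their shared base with DARE-TIES.
merge_method: dare_ties
base_model: mistralai/Mixtral-8x7B-v0.1
models:
  - model: mistralai/Mixtral-8x7B-v0.1+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      density: 0.5   # fraction of delta parameters kept (assumed value)
      weight: 0.33   # mixing ratio for this model (assumed value)
  - model: mistralai/Mixtral-8x7B-v0.1+SeanWu25/Mixtral_8x7b_Medicine
    parameters:
      density: 0.5
      weight: 0.33
  - model: mistralai/Mixtral-8x7B-v0.1+Epiculous/crunchy-onion-lora
    parameters:
      density: 0.5
      weight: 0.33
dtype: bfloat16
```

a multi-step recipe would chain configs like this, pointing later steps (e.g. `primordial_slop_c: a+b`) at the output directories of earlier ones.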

# mergekit

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
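
a minimal sketch of how one such step is typically run with the mergekit CLI; the install line, config file name, and output path here are assumptions, so check the mergekit README for the exact invocation:

```sh
# install mergekit from the repo linked above (assumed install path)
pip install git+https://github.com/cg123/mergekit.git
# run one merge step: config in, merged model directory out
mergekit-yaml hypothetical_step_a.yaml ./primordial_slop_a --cuda
```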