Update README.md
README.md CHANGED
@@ -13,21 +13,18 @@ It is hierarchical SLERP merged from the following models
 * teknium/OpenHermes-2.5-Mistral-7B (Apache 2.0)
 * Intel/neural-chat-7b-v3-3 (Apache 2.0)
 * meta-math/MetaMath-Mistral-7B (Apache 2.0)
-* openchat/openchat-3.5-1210 (Apache 2.0)
+* openchat/openchat-3.5-0106 was openchat/openchat-3.5-1210 (Apache 2.0)

 Here's how we did the hierarchical SLERP merge.
 ```
 [flux-base-optimized]
          ↑
-     [mistral]
          |
 [stage-1]-+-[openchat]
          ↑
-     [mistral]
          |
 [stage-0]-+-[meta-math]
          ↑
-     [mistral]
          |
 [openhermes]-+-[neural-chat]
 ```
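For context, the sketch below shows what one SLERP step between two checkpoints could look like, with the three merges from the diagram chained at the end. This is an illustrative assumption, not the actual merge script or configuration behind flux-base-optimized: the function names, the NumPy state dicts, and the fixed interpolation factor t = 0.5 are placeholders.

```python
# Illustrative sketch only (assumed, not this repo's actual merge code): a single
# SLERP step between two checkpoints, plus the staged chain from the diagram.
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors; fall back to plain LERP
    when the flattened tensors are nearly colinear and the angle is ill-conditioned."""
    v0 = w0.ravel().astype(np.float64)
    v1 = w1.ravel().astype(np.float64)
    cos_omega = np.clip(
        np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps), -1.0, 1.0
    )
    omega = np.arccos(cos_omega)
    if np.sin(omega) < eps:  # almost parallel: linear interpolation is fine
        merged = (1.0 - t) * v0 + t * v1
    else:
        merged = (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
    return merged.reshape(w0.shape).astype(w0.dtype)

def slerp_state_dicts(sd_a, sd_b, t=0.5):
    """Merge two checkpoints tensor-by-tensor with the same interpolation factor."""
    return {name: slerp(sd_a[name], sd_b[name], t) for name in sd_a}

# Hierarchical chain matching the diagram (model variables are placeholders for
# loaded state dicts):
#   stage_0             = slerp_state_dicts(openhermes, neural_chat)
#   stage_1             = slerp_state_dicts(stage_0, meta_math)
#   flux_base_optimized = slerp_state_dicts(stage_1, openchat)
```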