Update README.md
The finetunes used in this merge saw several hundred million tokens of instruction data. The merge was then healed on 150 million tokens of roleplaying data, and Kahneman-Tversky Optimization (KTO) was applied to the healed model to give it a unique output style.

By the numbers, this should be a direct improvement over **[Aura-MoE-2x4B](https://huggingface.co/AuraIndustries/Aura-MoE-2x4B)**.

Developed by **Aura Industries**, with contributions from **Anthracite Org**.

## Model Details