Update README.md
README.md CHANGED
@@ -69,7 +69,7 @@ Data augmentation techniques were used to grant grammatical, syntactical correct
 
 SauerkrautLM-7b-HerO was merged on 1 A100 with [mergekit](https://github.com/cg123/mergekit).
 The merged model contains [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca).
-We applied the gradient
+We applied the gradient SLERP method.
 
 
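For context, a gradient SLERP merge of these two models can be expressed in mergekit's YAML config format roughly as below. This is a sketch only: the layer ranges, the per-filter `t` schedules, and the dtype are illustrative assumptions, not the actual configuration used for SauerkrautLM-7b-HerO.

```yaml
# Hypothetical mergekit config for a gradient SLERP merge.
# Values below are illustrative, not the authors' settings.
slices:
  - sources:
      - model: teknium/OpenHermes-2.5-Mistral-7B
        layer_range: [0, 32]
      - model: Open-Orca/Mistral-7B-OpenOrca
        layer_range: [0, 32]
merge_method: slerp
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
  t:
    # A list of values is interpolated across layers,
    # producing the "gradient" in gradient SLERP.
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5   # fallback for all other tensors
dtype: bfloat16
```

A config like this is typically run with mergekit's CLI, e.g. `mergekit-yaml config.yml ./merged-model`, which writes the merged weights to the output directory.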