Update README.md
README.md CHANGED
@@ -24,7 +24,7 @@ language:
 
 ## Model description
 
-Nous Hermes 2 Mixtral
+Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).
 
 The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
 