Update README.md
README.md (CHANGED)
@@ -9,9 +9,9 @@ Everyone-Coder-4x7b
 
 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/ECrHQnZnv8UM9GUCQtlWW.jpeg)
 
-EveryoneLLM series of models are a new Mixtral type model created using experts that were finetuned by the community, for the community. This is the first model to release in the series and it is a coding specific model. EveryoneLLM which will be a more generalized model will be released in the near future after more work is done to fine tune the process of
+The EveryoneLLM series of models is a new Mixtral-type model created using experts that were fine-tuned by the community, for the community. This is the first model to be released in the series, and it is a coding-specific model. EveryoneLLM, which will be a more generalized model, will be released in the near future, after more work is done to fine-tune the process of merging Mistral models into larger Mixtral models with greater success.
 
-The goal of the EveryoneLLM series of models is to be a replacement or an alternative to Mixtral-8x7b that is more suitable for general and specific use, as well as easier to fine tune. Since
+The goal of the EveryoneLLM series of models is to be a replacement for, or an alternative to, Mixtral-8x7b that is more suitable for general and specific use, as well as easier to fine-tune. Since Mistralai is being secretive about the "secret sauce" that makes Mixtral-Instruct such an effective fine-tune of the Mixtral base model, I've decided it's time for the community to compete directly with Mistralai on our own.
 
 The models that were used in this merger were as follows:
 
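Merges of this kind (several fine-tuned Mistral-7B experts combined into one Mixtral-style mixture-of-experts) are commonly built with mergekit's MoE mode. Below is a minimal sketch assuming mergekit's `mergekit-moe` config format; the model names and routing prompts are illustrative placeholders, not the actual recipe used for this model:

```yaml
# Hypothetical mergekit-moe config: combine community fine-tunes of
# Mistral-7B into a Mixtral-style mixture-of-experts.
# All model names and prompts below are placeholders, NOT the recipe
# actually used for Everyone-Coder-4x7b.
base_model: mistralai/Mistral-7B-v0.1   # shared base the experts were tuned from
gate_mode: hidden        # route tokens by hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: example-org/mistral-7b-python-finetune   # placeholder
    positive_prompts:
      - "Write a Python function"
  - source_model: example-org/mistral-7b-sql-finetune      # placeholder
    positive_prompts:
      - "Write an SQL query"
```

A config like this would then be built with a command along the lines of `mergekit-moe config.yml ./merged-model`; the expert list and routing prompts are what determine which fine-tune handles which kind of request.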