Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of LLMs.

| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
(WEBINSTRUCT) Coming soon...

![Project Framework](webinstruct.png)

## Training Procedure
The models are fine-tuned on the WEBINSTRUCT dataset, starting from the original Llama-3, Mistral, and Mixtral base models. The training procedure varies across models depending on their size; check out our paper for more details.
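Until WEBINSTRUCT is released, the checkpoints linked above can presumably be loaded like any other Hugging Face model. The snippet below is a minimal sketch, not an official example from this repo; the prompt format and generation settings are illustrative assumptions, and running it downloads the full model weights.

```python
# Hypothetical usage sketch: load a MAmmoTH2 checkpoint with Hugging Face transformers.
# The model ID comes from the table above; prompt and decoding settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TIGER-Lab/MAmmoTH2-8x7B-Plus"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A simple reasoning-style prompt; the official chat template may differ.
prompt = "Question: If a train travels 60 miles in 1.5 hours, what is its average speed? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to the other sizes by swapping `model_id` for any checkpoint in the table.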