This is a test run for a future MoE model built from 70B-parameter models. I took WizardLM/WizardLM-70B-V1.0 and migtissera/Synthia-70B as the two base models and created discriminator prompts that push technical, logic, and math questions to the Wizard side and creative or conversational questions to the Synthia side. Now that this is working for me, I am going to move on to fine-tuning models for more specific tasks. This model takes about 240GB of VRAM for full-precision inference. As far as I know, it is the first publicly available 125B-parameter MoE model. I plan on making more and sharing them, of course.
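This card doesn't include the routing code, but the discriminator idea can be sketched at the whole-prompt level roughly as below. This is a hypothetical sketch, not the actual implementation: the classifier prompt and the `route` helper are mine, and the released MoE performs its routing inside the network rather than with separate pipelines.

```python
# Hypothetical sketch of prompt-level discriminator routing between two experts.
# The ROUTER_PROMPT wording and helper names are illustrative assumptions.
from transformers import pipeline

ROUTER_PROMPT = (
    "Classify the request below as TECHNICAL (code, logic, math) or "
    "CREATIVE (writing, conversation). Answer with exactly one word.\n\n"
    "Request: {question}\nAnswer:"
)

# The two experts named in this card; reusing one as the discriminator
# is a simplification for the sketch.
wizard = pipeline("text-generation", model="WizardLM/WizardLM-70B-V1.0", device_map="auto")
synthia = pipeline("text-generation", model="migtissera/Synthia-70B", device_map="auto")
classifier = wizard

def route(question: str) -> str:
    """Send technical questions to the WizardLM expert, everything else to Synthia."""
    label = classifier(
        ROUTER_PROMPT.format(question=question),
        max_new_tokens=3,
        return_full_text=False,  # return only the completion, not the prompt
    )[0]["generated_text"]
    expert = wizard if "TECHNICAL" in label.upper() else synthia
    return expert(question, max_new_tokens=512, return_full_text=False)[0]["generated_text"]

print(route("Prove that the sum of two even integers is even."))
```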
Hopefully I can add more info on this model soon; it loads perfectly for me and responds nicely. It might take me a bit, since I want to build "Cerberus" with the fine-tuned models and get that released. But enjoy this one in the meantime. It is a Llama 2 based model.
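If you want to try loading it yourself, a minimal sketch with transformers follows. It assumes float16 weights and enough combined GPU memory for `device_map="auto"` to shard the model; the prompt is just an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibivibiv/orthorus-125b-moe"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# float16 still needs on the order of 240GB of GPU memory in total;
# device_map="auto" shards the weights across all available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain the quadratic formula."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```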
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 69.58 |
| AI2 Reasoning Challenge (25-shot) | 67.66 |
| HellaSwag (10-shot)               | 85.52 |
| MMLU (5-shot)                     | 68.94 |
| TruthfulQA (0-shot)               | 56.27 |
| Winogrande (5-shot)               | 82.32 |
| GSM8k (5-shot)                    | 56.79 |