
This is a test run for a future MoE model built from 70B-parameter models. I took WizardLM/WizardLM-70B-V1.0 and migtissera/Synthia-70B as the two base models and created discriminator prompts that push technical, logic, and math questions to the Wizard expert and all creative or conversational questions to the Synthia expert. Now that this is working for me, I am going to move on to fine-tuning models for more specific tasks. This model takes about 240GB of VRAM for full-precision inference. As far as I know, it is the first 125B-parameter MoE model publicly available. I plan on making more and sharing them, of course.
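
To illustrate the discriminator-prompt idea, here is a minimal sketch of prompt-level routing between the two experts. This is not the model's actual gating code (the merged model gates inside its MoE layers); the anchor prompts and the sentence-transformers encoder below are just illustrative assumptions.

```python
# Sketch only: route a prompt to whichever expert's "discriminator" anchor
# prompts it most resembles. Assumes the sentence-transformers package is
# installed; the anchor prompts are hypothetical examples.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Anchor ("discriminator") prompts describing what each expert should handle.
expert_anchors = {
    "WizardLM/WizardLM-70B-V1.0": [
        "Solve this math problem step by step.",
        "Explain the logic behind this algorithm.",
        "Debug this piece of code.",
    ],
    "migtissera/Synthia-70B": [
        "Write a short story about a lighthouse keeper.",
        "Continue this friendly conversation.",
        "Compose a poem in free verse.",
    ],
}

anchor_embeddings = {
    name: encoder.encode(prompts, convert_to_tensor=True)
    for name, prompts in expert_anchors.items()
}

def route(prompt: str) -> str:
    """Return the expert whose anchor prompts are most similar to the input."""
    query = encoder.encode(prompt, convert_to_tensor=True)
    scores = {
        name: util.cos_sim(query, emb).max().item()
        for name, emb in anchor_embeddings.items()
    }
    return max(scores, key=scores.get)

print(route("Prove that the sum of two even numbers is even."))  # technical -> Wizard
print(route("Tell me a bedtime story about a dragon."))          # creative -> Synthia
```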

Hopefully I can add more info on this model soon; it loads perfectly for me and responds nicely. It might take me a bit, since I want to build "Cerberus" with the fine-tuned models and get it released. But enjoy this one in the meantime; it is a Llama 2 based model.
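
If you want to try it yourself, here is a minimal loading sketch with Hugging Face transformers. It assumes you have roughly 240 GB of total GPU memory for full-precision weights and the accelerate package installed; the prompt is just an example.

```python
# Minimal sketch: load the model across available GPUs and generate a reply.
# Requires transformers and accelerate; ~240 GB of GPU memory for F32 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibivibiv/orthorus-125b-moe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # matches the published F32 tensors
    device_map="auto",          # spread shards across available GPUs
)

prompt = "Explain why the square root of 2 is irrational."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```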

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 69.58 |
| AI2 Reasoning Challenge (25-Shot) | 67.66 |
| HellaSwag (10-Shot) | 85.52 |
| MMLU (5-Shot) | 68.94 |
| TruthfulQA (0-shot) | 56.27 |
| Winogrande (5-shot) | 82.32 |
| GSM8k (5-shot) | 56.79 |