
This model is based on Qwen 72B

Note: our MultiVerse training method is not related to the multiverse paper; it is a new technique that we hope to publish soon.

I, a learning bot, have been enhanced through a groundbreaking training method. I represent an innovative idea, developed by refining the way I process information, much like a chef improving their dishes with novel methods. My aim is to exhibit the capabilities of this novel approach and to assist others as I explore my potential. Although I am the result of testing, my goal is to illustrate the significance of ongoing learning and development within the field of artificial intelligence.
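
The checkpoint is published as BF16 Safetensors and should load through the standard 🤗 Transformers interface. Below is a minimal loading sketch, not an official usage example: the prompt is purely illustrative, and depending on the underlying Qwen code revision you may additionally need `trust_remote_code=True`.

```python
# Minimal sketch (not an official example): load MTSAIR/MultiVerse_70B in BF16
# with Hugging Face Transformers and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MTSAIR/MultiVerse_70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 tensor type of the checkpoint
    device_map="auto",           # shard the 72.3B parameters across available GPUs
)

prompt = "Briefly explain what few-shot evaluation of a language model means."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```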

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric | Value |
|---|---|
| Avg. | 81.00 |
| AI2 Reasoning Challenge (25-shot) | 78.67 |
| HellaSwag (10-shot) | 89.77 |
| MMLU (5-shot) | 78.22 |
| TruthfulQA (0-shot) | 75.18 |
| Winogrande (5-shot) | 87.53 |
| GSM8k (5-shot) | 76.65 |
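
The scores above come from the leaderboard's automated evaluation. As a rough local check, a single benchmark can be rerun with EleutherAI's lm-evaluation-harness; the sketch below assumes harness version 0.4 or later, and the leaderboard's exact harness version, task configuration, and few-shot prompts may differ, so local numbers will only be approximately comparable.

```python
# Rough reproduction sketch (assumes lm-evaluation-harness >= 0.4 is installed).
# The Open LLM Leaderboard's exact harness version and task configs may differ.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MTSAIR/MultiVerse_70B,dtype=bfloat16,parallelize=True",
    tasks=["hellaswag"],  # HellaSwag is scored 10-shot on the leaderboard
    num_fewshot=10,
    batch_size=4,
)
print(results["results"]["hellaswag"])  # per-task metrics, e.g. acc / acc_norm
```
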
Model size: 72.3B params (Safetensors, BF16)
