# ZEUS 8B V30
This model is a merge of the following pre-trained and fine-tuned LLMs, created using mergekit; a short loading sketch follows the list.
- (base) T145/KRONOS-8B-V1-P1
- arcee-ai/Llama-3.1-SuperNova-Lite
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- unsloth/Llama-3.1-Storm-8B
- VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
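A minimal inference sketch using the transformers library, assuming the published repo id `T145/ZEUS-8B-V30` and a GPU with bfloat16 support:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "T145/ZEUS-8B-V30"  # assumed repo id for this merge
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3.1 instruct derivatives expect the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize what a model merge is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```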
## Merge Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: T145/KRONOS-8B-V1-P1
dtype: bfloat16
merge_method: dare_ties
name: ZEUS-8B-V30
parameters:
  int8_mask: 1.0
  normalize: 1.0
  random_seed: 145
slices:
  - sources:
      - layer_range: [0, 32]
        model: unsloth/Llama-3.1-Storm-8B
        parameters:
          density: 0.94
          weight: 0.35
      - layer_range: [0, 32]
        model: arcee-ai/Llama-3.1-SuperNova-Lite
        parameters:
          density: 0.92
          weight: 0.26
      - layer_range: [0, 32]
        model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
        parameters:
          density: 0.91
          weight: 0.2
      - layer_range: [0, 32]
        model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
        parameters:
          density: 0.93
          weight: 0.19
      - layer_range: [0, 32]
        model: T145/KRONOS-8B-V1-P1
tokenizer:
  source: union
  tokens:
    <|begin_of_text|>:
      force: true
      source: T145/KRONOS-8B-V1-P1
    <|eot_id|>:
      force: true
      source: T145/KRONOS-8B-V1-P1
```
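For context on the parameters above: `dare_ties` builds each model's task vector (its weights minus the base model's), applies DARE's random drop-and-rescale controlled by `density`, elects a per-parameter sign as in TIES, and then takes the `weight`-ed sum of agreeing contributions. The sketch below is a toy NumPy illustration of that idea, not mergekit's actual implementation; the tensors are made up and the sign election is simplified.

```python
import numpy as np

rng = np.random.default_rng(145)  # mirrors random_seed in the config

def dare_sparsify(delta: np.ndarray, density: float) -> np.ndarray:
    """DARE: randomly keep a `density` fraction of the task vector's
    entries and rescale survivors by 1/density to preserve scale."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

# Toy task vectors standing in for (fine-tuned - base) weight deltas.
base = rng.normal(size=(4, 4))
deltas = [rng.normal(size=(4, 4)) * 0.01 for _ in range(2)]
densities = [0.94, 0.92]  # per-model density values from the config
weights = [0.35, 0.26]    # per-model weight values from the config

sparse = [dare_sparsify(d, p) for d, p in zip(deltas, densities)]

# Simplified TIES sign election: keep only per-parameter contributions
# whose sign agrees with the weighted-majority sign across models.
elected = np.sign(sum(w * s for w, s in zip(weights, sparse)))
merged_delta = sum(
    w * np.where(np.sign(s) == elected, s, 0.0)
    for w, s in zip(weights, sparse)
)

merged = base + merged_delta  # final merged tensor (normalization omitted)
```

To reproduce the actual merge, save the YAML above to a file and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./ZEUS-8B-V30` (flags vary by mergekit version).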
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric | Value (%) |
|---|---|
| Average | 28.86 |
| IFEval (0-Shot) | 74.36 |
| BBH (3-Shot) | 32.19 |
| MATH Lvl 5 (4-Shot) | 14.43 |
| GPQA (0-Shot) | 9.40 |
| MuSR (0-Shot) | 10.07 |
| MMLU-PRO (5-Shot) | 32.71 |
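The reported Average is the arithmetic mean of the six benchmark scores: (74.36 + 32.19 + 14.43 + 9.40 + 10.07 + 32.71) / 6 = 28.86.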