# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the SLERP merge method, using 01-ai/Yi-1.5-34B-Chat as a base.
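For intuition: rather than averaging weights linearly, SLERP interpolates each pair of tensors along the arc between them, which preserves their magnitudes better than a straight linear blend. Below is a minimal NumPy sketch of the underlying formula; it is not mergekit's actual implementation, which additionally applies the per-layer `t` schedule from the configuration further down.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    # Measure the angle between the two weight directions.
    v0_unit = v0 / (np.linalg.norm(v0) + eps)
    v1_unit = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0))
    omega = np.arccos(dot)
    if np.sin(omega) < eps:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    # Interpolate along the arc between the two tensors.
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

# Toy demo on random 1-D "weights"; t = 0.38 matches this merge's default.
a, b = np.random.randn(16), np.random.randn(16)
print(slerp(0.38, a, b)[:4])
```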
### Models Merged

The following models were included in the merge:

* CombinHorizon/YiSM-blossom5.1-34B-SLERP
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: 01-ai/Yi-1.5-34B-Chat
    layer_range:
    - 0
    - 60
  - model: CombinHorizon/YiSM-blossom5.1-34B-SLERP
    layer_range:
    - 0
    - 60
merge_method: slerp
base_model: 01-ai/Yi-1.5-34B-Chat
parameters:
  t:
  - filter: self_attn
    value:
    - 0
    - 0.5
    - 0.3
    - 0.7
    - 1
  - filter: mlp
    value:
    - 1
    - 0.5
    - 0.7
    - 0.3
    - 0
  - value: 0.38
dtype: bfloat16
```
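Here `t` controls how far each tensor moves from the base model (t = 0) toward CombinHorizon/YiSM-blossom5.1-34B-SLERP (t = 1): self-attention weights follow the gradient `[0, 0.5, 0.3, 0.7, 1]` across the 60 layers, MLP weights follow the mirrored gradient, and all remaining tensors use a constant t = 0.38, staying closer to the base. The configuration can be re-run with mergekit's `mergekit-yaml` entry point. Once merged, the model loads like any other Transformers causal LM; a minimal sketch, assuming the tokenizer ships a chat template as Yi-1.5-34B-Chat does:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Yislerp2-34B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",           # shard the 34B model across available devices
)

# Chat-style prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Briefly explain model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```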
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.10 |
| IFEval (0-Shot)     | 39.93 |
| BBH (3-Shot)        | 47.20 |
| MATH Lvl 5 (4-Shot) | 21.00 |
| GPQA (0-shot)       | 15.21 |
| MuSR (0-shot)       | 15.85 |
| MMLU-PRO (5-shot)   | 41.38 |