---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
- unsloth/DeepSeek-R1-Distill-Llama-8B
- nbeerbower/Llama3.1-Allades-8B
model-index:
- name: Distilled-Whiskey-8b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 34.48
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-Whiskey-8b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 29.32
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-Whiskey-8b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 21.53
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-Whiskey-8b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.85
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-Whiskey-8b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.22
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-Whiskey-8b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-Whiskey-8b
      name: Open LLM Leaderboard
---
# Distilled-Whiskey-8b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B) as a base.
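For intuition, here is a minimal per-tensor sketch of the TIES arithmetic (trim each task vector to a target density, elect a sign per parameter, then sum the agreeing values). This is an illustration only, not mergekit's API; `ties_merge_tensor` and its arguments are hypothetical names, with the densities and weights mirroring the config below:

```python
import torch

def ties_merge_tensor(base, tuned, densities, weights):
    """Illustrative TIES merge for a single weight tensor.

    base: base-model tensor; tuned: list of fine-tuned tensors;
    densities/weights: per-model values as in the YAML config
    (density 0.4/0.5, weight 0.5/0.6).
    """
    deltas = []
    for ft, density, w in zip(tuned, densities, weights):
        delta = ft - base                            # task vector
        k = max(1, int(delta.numel() * density))     # trim: keep top-k by magnitude
        thresh = delta.abs().flatten().topk(k).values.min()
        delta = torch.where(delta.abs() >= thresh, delta, torch.zeros_like(delta))
        deltas.append(w * delta)                     # apply per-model weight
    stacked = torch.stack(deltas)
    elected = stacked.sum(dim=0).sign()              # elect a sign per parameter
    agree = stacked.sign() == elected                # drop sign-conflicting values
    merged = (stacked * agree).sum(dim=0)            # disjoint sum (normalize: false)
    return base + merged
```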
### Models Merged
The following models were included in the merge:
* [huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated)
* [nbeerbower/Llama3.1-Allades-8B](https://huggingface.co/nbeerbower/Llama3.1-Allades-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: unsloth/DeepSeek-R1-Distill-Llama-8B
    # no parameters necessary for base model
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-8B-abliterated
    parameters:
      density: 0.4
      weight: 0.5
  - model: nbeerbower/Llama3.1-Allades-8B
    parameters:
      density: 0.5
      weight: 0.6
merge_method: ties
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
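The YAML above can be passed to mergekit's `mergekit-yaml` entry point to reproduce the merge. The card itself ships no inference example; the sketch below loads the merged model with `transformers`, assuming the repo inherits the base model's tokenizer and chat template:

```python
# Minimal inference sketch (assumption: the merged repo keeps the
# DeepSeek-R1-Distill chat template; requires accelerate for device_map).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Triangle104/Distilled-Whiskey-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the TIES merge method."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```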
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Triangle104__Distilled-Whiskey-8b-details).
| Metric |Value|
|-------------------|----:|
|Avg. |22.28|
|IFEval (0-Shot) |34.48|
|BBH (3-Shot) |29.32|
|MATH Lvl 5 (4-Shot)|21.53|
|GPQA (0-shot) |10.85|
|MuSR (0-shot) |11.22|
|MMLU-PRO (5-shot) |26.30|