# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with CultriX/Qwen2.5-14B-MegaMerge-pt1 as the base model.
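
As context for the configuration below, here is a minimal per-tensor sketch of what DARE TIES does. This is an illustration, not mergekit's actual implementation; the helper name is hypothetical, and the `normalize` step from the config is omitted for brevity.

```python
# Illustrative DARE TIES merge for a single weight tensor (hypothetical helper,
# not mergekit's code). DARE randomly drops delta parameters and rescales the
# survivors; TIES keeps only deltas that agree with an elected majority sign.
import torch

def dare_ties(base, models, densities, weights):
    deltas = []
    for m, d, w in zip(models, densities, weights):
        delta = m - base                     # task vector relative to the base model
        keep = torch.rand_like(delta) < d    # DARE: keep each entry with prob = density
        deltas.append(w * delta * keep / d)  # rescale so the expected delta is unchanged
    stacked = torch.stack(deltas)
    sign = torch.sign(stacked.sum(dim=0))    # TIES: elect a majority sign per entry
    agree = torch.sign(stacked) == sign      # discard deltas that conflict with it
    return base + (stacked * agree).sum(dim=0)

# e.g. dare_ties(base_w, [stock_w, wernicke_w], densities=[0.5, 0.5], weights=[0.6, 0.4])
```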

### Models Merged

The following models were included in the merge:

- CultriX/Qwen2.5-14B-MergeStock
- CultriX/Qwen2.5-14B-Wernicke

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# final_dare_ties_merge.yaml
models:
  - model: CultriX/Qwen2.5-14B-MergeStock
    parameters:
      density: 0.5  # Retain 50% of the most significant parameters
      weight: 0.6   # Emphasize MergeStock's contributions
  - model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      density: 0.5  # Retain 50% of the most significant parameters
      weight: 0.4   # Incorporate Wernicke's contributions
merge_method: dare_ties
base_model: CultriX/Qwen2.5-14B-MegaMerge-pt1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-14B-Instruct
```
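
A config like this is typically applied with mergekit's CLI, e.g. `mergekit-yaml final_dare_ties_merge.yaml ./merged`. Below is a minimal sketch for loading and querying the merged model with transformers; the prompt, dtype, and generation settings are illustrative, so adjust them to your hardware.

```python
# Minimal sketch: load the merged model and run one chat turn with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CultriX/Qwen2.5-14B-MegaMerge-pt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Give a one-line summary of model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```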

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 36.69 |
| IFEval (0-shot)     | 56.83 |
| BBH (3-shot)        | 50.91 |
| MATH Lvl 5 (4-shot) | 27.34 |
| GPQA (0-shot)       | 17.23 |
| MuSR (0-shot)       | 18.74 |
| MMLU-PRO (5-shot)   | 49.12 |
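
For reference, the reported average is the unweighted mean of the six benchmark scores: (56.83 + 50.91 + 27.34 + 17.23 + 18.74 + 49.12) / 6 = 36.695, matching the reported 36.69 up to rounding.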