NQLSG-Qwen2.5-14B-MegaFusion-v9.3

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft (with the Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-dpo-lora LoRA adapter applied) as the base model.
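As a rough intuition for the method: Model Stock (Jang et al., 2024) averages the fine-tuned checkpoints, then interpolates that average back toward the pre-trained anchor using a ratio derived from the angle between the fine-tuned models' task vectors. The NumPy sketch below illustrates the per-tensor idea only; it is not mergekit's implementation, and the function name is illustrative.

```python
import numpy as np

def model_stock_merge(finetuned, pretrained):
    """Illustrative sketch of the Model Stock interpolation for one
    weight tensor: average the k fine-tuned tensors, then blend with
    the pre-trained anchor using a ratio t computed from the mean
    pairwise cosine similarity of the task vectors (w_i - w_0).
    Assumes k >= 2 and non-zero task vectors."""
    k = len(finetuned)
    deltas = [w - pretrained for w in finetuned]
    # Mean cosine similarity over all pairs of task vectors.
    cos = np.mean([
        np.dot(a.ravel(), b.ravel()) / (np.linalg.norm(a) * np.linalg.norm(b))
        for i, a in enumerate(deltas) for b in deltas[i + 1:]
    ])
    # Interpolation ratio from the Model Stock paper.
    t = k * cos / ((k - 1) * cos + 1)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * pretrained
```

When the fine-tuned models agree closely (cosine near 1), t approaches 1 and the result is essentially their average; when they point in dissimilar directions, the merge leans more on the pre-trained weights.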

Models Merged

The following models were included in the merge:

- Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.1
- wanlige/li-14b-v0.4
- Cran-May/tempmotacilla-cinerea-0308

Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft+Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-dpo-lora
tokenizer_source: base
dtype: bfloat16
parameters:
  int8_mask: true
models:
  - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.1
  - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft
  - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-sft+Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v9.2-fusechat-dpo-lora
  - model: wanlige/li-14b-v0.4
  - model: Cran-May/tempmotacilla-cinerea-0308
```
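A merge like this can typically be reproduced with mergekit's command-line tool, assuming mergekit is installed and the configuration above is saved as `config.yaml` (the output directory name below is illustrative):

```shell
# Install mergekit (assumes a release with model_stock support).
pip install mergekit

# Run the merge described by config.yaml.
# --cuda uses a GPU if one is available; omit it to merge on CPU.
mergekit-yaml config.yaml ./merged-model --cuda
```

Note that the `base_model` entry uses mergekit's `model+lora` syntax, so the DPO LoRA adapter is applied to the SFT checkpoint before merging.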
