---
base_model:
  - Qwen/Qwen2.5-Coder-32B-Instruct
  - NovaSky-AI/Sky-T1-32B-Flash
  - Qwen/Qwen2.5-Coder-32B
library_name: transformers
tags:
  - mergekit
  - merge
  - qwen2
license: apache-2.0
---

# tomasmcm/sky-t1-coder-32b-flash

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

I wanted to see whether it is possible to improve on FuseO1-DeepSeekR1-Qwen2.5-Coder-32B-Preview and CoderO1-DeepSeekR1-Coder-32B-Preview by using Sky-T1-32B-Flash, rather than DeepSeek-R1-Distill-Qwen-32B, as the reasoning model merged with Qwen2.5-Coder-32B-Instruct. The idea is to get a strong coder model that can reason, but without producing very long reasoning chains (hence the Flash variant).
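As an illustration of the intended use (short-reasoning code generation), here is a minimal inference sketch with `transformers`; it is not part of the original card, and the prompt and generation settings are placeholders.

```python
# Minimal inference sketch. Assumptions: transformers and a recent torch are
# installed, and your hardware can host a 32B model (dtype/device_map may
# need tuning for your setup).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomasmcm/sky-t1-coder-32b-flash"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # the merge was produced in bfloat16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```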

GGUF quantizations are available at [mradermacher/sky-t1-coder-32b-flash-GGUF](https://huggingface.co/mradermacher/sky-t1-coder-32b-flash-GGUF) (thank you!).
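For quick local testing of those quants, a minimal sketch using `llama-cpp-python` could look like the following; the quant filename pattern (`*Q4_K_M.gguf`) and generation settings are assumptions, not part of the original card.

```python
# Local-inference sketch. Assumptions: llama-cpp-python and huggingface_hub
# are installed (`pip install llama-cpp-python huggingface_hub`), and Q4_K_M
# is only an example choice among the available quant files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/sky-t1-coder-32b-flash-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; downloads the matching quant
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```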

## Merge Details

### Merge Method

This model was merged using the SCE merge method with [Qwen/Qwen2.5-Coder-32B](https://huggingface.co/Qwen/Qwen2.5-Coder-32B) as the base model.

### Models Merged

The following models were included in the merge:

* [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
* [NovaSky-AI/Sky-T1-32B-Flash](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  # Pivot model
  - model: Qwen/Qwen2.5-Coder-32B
  # Target models
  - model: Qwen/Qwen2.5-Coder-32B-Instruct
  - model: NovaSky-AI/Sky-T1-32B-Flash
merge_method: sce
base_model: Qwen/Qwen2.5-Coder-32B
parameters:
  select_topk: 1.0
dtype: bfloat16
```
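To reproduce the merge, the config above can be fed to mergekit. Here is a minimal sketch using mergekit's Python API, mirroring the pattern from the mergekit README; the output path and option values are assumptions.

```python
# Reproduction sketch: runs the SCE merge from the YAML above.
# Assumptions: mergekit is installed (`pip install mergekit`), the YAML is
# saved as config.yaml, and enough disk/VRAM is available for three 32B models.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./sky-t1-coder-32b-flash",  # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU for the merge if available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # reduce peak memory while loading
    ),
)
```

The `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./sky-t1-coder-32b-flash --cuda`) is the equivalent one-liner.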