# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DELLA merge method, with Sorawiz/KunouSky-32B as the base model.
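DELLA operates on deltas: each contributing model is expressed as its difference from the base, low-magnitude elements of each delta are dropped with higher probability than high-magnitude ones, and the surviving elements are rescaled so the delta's expected value is preserved before the weighted merge. In the configuration below, `density` sets the expected fraction of delta elements kept per model, `epsilon` the magnitude-dependent spread of the keep probabilities, `weight` each model's share of the weighted sum, and `lambda` a final scale on the merged delta. The following is a minimal PyTorch sketch of the drop-and-rescale step; the function `della_drop` and its rank-based probability schedule are illustrative assumptions, not mergekit's actual implementation.

```python
import torch

def della_drop(delta: torch.Tensor, density: float, epsilon: float) -> torch.Tensor:
    """Illustrative DELLA-style drop-and-rescale (assumed schedule, not mergekit's code)."""
    flat = delta.abs().flatten()
    # Magnitude ranks mapped to [0, 1]; the largest elements approach 1.
    ranks = flat.argsort().argsort().float() / max(flat.numel() - 1, 1)
    # Keep probability centred on `density`, shifted by up to +/- epsilon/2
    # so that high-magnitude elements are kept more often.
    keep_prob = (density - epsilon / 2 + epsilon * ranks).clamp(1e-8, 1.0)
    mask = torch.bernoulli(keep_prob)
    # Rescale survivors (as in DARE) so the expected delta is unchanged.
    return (delta.flatten() * mask / keep_prob).reshape(delta.shape)
```

With `density: 0.7` and `epsilon: 0.1`, for example, keep probabilities range from about 0.65 for the smallest-magnitude delta elements to 0.75 for the largest; `lambda: 1.0` then leaves the merged delta unscaled when it is added back to the base weights.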

### Models Merged

The following models were included in the merge:

* Sao10K/32B-Qwen2.5-Kunou-v1
* allura-org/Qwen2.5-32b-RP-Ink
* EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
* ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3 + pipihand01/QwQ-32B-Preview-abliterated-lora-rank32
* AiCloser/Qwen2.5-32B-AGI + pipihand01/QwQ-32B-Preview-abliterated-lora-rank32
* rombodawg/Rombos-LLM-V2.5-Qwen-32b + pipihand01/QwQ-32B-Preview-abliterated-lora-rank32
* Daemontatox/Cogito-Ultima + pipihand01/QwQ-32B-Preview-abliterated-lora-rank32

Entries written as `model+lora` use mergekit's syntax for applying the named LoRA adapter to the model before merging.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sorawiz/KunouSky-32B
  - model: Sao10K/32B-Qwen2.5-Kunou-v1
    parameters:
      density: 1
      weight: 0.5
  - model: allura-org/Qwen2.5-32b-RP-Ink
    parameters:
      density: 0.7
      weight: 0.5
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      density: 0.7
      weight: 0.5
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3+pipihand01/QwQ-32B-Preview-abliterated-lora-rank32
    parameters:
      density: 0.5
      weight: 0.5
  - model: AiCloser/Qwen2.5-32B-AGI+pipihand01/QwQ-32B-Preview-abliterated-lora-rank32
    parameters:
      density: 0.5
      weight: 0.3
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-32b+pipihand01/QwQ-32B-Preview-abliterated-lora-rank32
    parameters:
      density: 0.5
      weight: 0.3
  - model: Daemontatox/Cogito-Ultima+pipihand01/QwQ-32B-Preview-abliterated-lora-rank32
    parameters:
      density: 0.3
      weight: 0.3
merge_method: della
base_model: Sorawiz/KunouSky-32B
parameters:
  normalize: true
  int8_mask: true
  lambda: 1.0
  epsilon: 0.1
dtype: bfloat16
```
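The merge itself can be reproduced by feeding this file to mergekit's `mergekit-yaml` CLI. To use the result, here is a minimal loading sketch with Hugging Face Transformers, assuming the merged weights are published under this repo's id (`Sorawiz/CrossCreative-32Bv0`), that the tokenizer ships a chat template (Qwen2.5 models generally do), and that you have hardware for a 32B-parameter model in bfloat16:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sorawiz/CrossCreative-32Bv0"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's `dtype: bfloat16`
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```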