---
base_model:
- Kaoeiri/MS_fujin-2409-22B
- Kaoeiri/MS-Physician-2409-22B
- Kaoeiri/MS_dampf-2409-22B
- Kaoeiri/MS_springydragon-2409-22B
- unsloth/Mistral-Small-Instruct-2409
library_name: transformers
tags:
- mergekit
- merge

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Mistral-Small-Instruct-2409](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) as the base model.
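
Conceptually, DARE-TIES treats each fine-tuned model as a task vector (its delta from the base), randomly drops a fraction `1 - density` of each delta's entries and rescales the survivors, then resolves sign conflicts TIES-style before adding the weighted result back onto the base. The sketch below illustrates the idea on flat tensors; it is a simplified illustration, not mergekit's implementation, and it omits `epsilon` handling and per-tensor bookkeeping.

```python
import torch

def dare_ties(base, finetuned, weights, density=0.85, lam=1.25):
    """Toy DARE-TIES merge over flat parameter tensors.

    A simplified sketch of the method, not mergekit's code.
    """
    # Task vectors: each fine-tune expressed as a delta from the base.
    deltas = [ft - base for ft in finetuned]

    pruned = []
    for d in deltas:
        # DARE: keep each entry with probability `density`,
        # rescale survivors by 1/density to preserve the expected value.
        mask = torch.bernoulli(torch.full_like(d, density))
        pruned.append(d * mask / density)

    # Weighted task vectors; with `normalize: false` the weighted sum
    # is used as-is (no division by the total weight).
    stacked = torch.stack([w * d for w, d in zip(weights, pruned)])

    # TIES: elect a per-parameter sign, drop disagreeing components.
    elected = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected).to(stacked.dtype)
    merged_delta = (stacked * agree).sum(dim=0)

    # lambda scales the combined task vector before it is applied.
    return base + lam * merged_delta
```

Note that `density: 1` (used for the Physician model in the configuration below) means that model's delta is kept unpruned, while the other three are pruned at `density: 0.85`.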

### Models Merged

The following models were included in the merge:
* [Kaoeiri/MS_fujin-2409-22B](https://huggingface.co/Kaoeiri/MS_fujin-2409-22B)
* [Kaoeiri/MS-Physician-2409-22B](https://huggingface.co/Kaoeiri/MS-Physician-2409-22B)
* [Kaoeiri/MS_dampf-2409-22B](https://huggingface.co/Kaoeiri/MS_dampf-2409-22B)
* [Kaoeiri/MS_springydragon-2409-22B](https://huggingface.co/Kaoeiri/MS_springydragon-2409-22B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Kaoeiri/MS-Physician-2409-22B
    parameters:
      weight: 0.42
      density: 1
  - model: Kaoeiri/MS_springydragon-2409-22B
    parameters:
      weight: 0.7
      density: 0.85
  - model: Kaoeiri/MS_dampf-2409-22B
    parameters:
      weight: 0.8
      density: 0.85
  - model: Kaoeiri/MS_fujin-2409-22B
    parameters:
      weight: 0.7
      density: 0.85

merge_method: dare_ties
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85
  epsilon: 0.07
  lambda: 1.25
  normalize: false

dtype: bfloat16
tokenizer_source: union
```
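
Saving this configuration to a file and running mergekit's CLI (e.g. `mergekit-yaml config.yaml ./merged-model`, with the filename and output path as placeholders) reproduces the merge. The resulting checkpoint loads like any other `transformers` causal LM; the model path below is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged-model"  # placeholder: substitute this model's Hub id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's `dtype: bfloat16`
    device_map="auto",
)

messages = [{"role": "user", "content": "Give me a two-sentence summary of model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```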