---
base_model:
- NovaSky-AI/Sky-T1-mini
- Rombo-Org/Rombo-LLM-V2.5-Qwen-7b
- Qwen/Qwen2.5-7B-Instruct-1M
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

Testing note: the model seems to make some spelling mistakes, then asks itself about system prompts.

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M) as the base.
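
For intuition, here is a toy NumPy sketch of the TIES procedure: trim each model's delta from the base by magnitude (the `density` parameter), elect a sign per parameter, then average only the deltas that agree with that sign. This is a simplified illustration, not mergekit's implementation; in particular, the single post-merge `weight` scale stands in for mergekit's per-model weights.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5, weight=0.5):
    """Toy TIES merge of fine-tuned weight tensors into a base tensor."""
    # 1. Task vectors: each model's delta from the base weights.
    deltas = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = int(np.ceil(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # 3. Elect a sign per parameter from the summed trimmed deltas.
    stacked = np.stack(trimmed)
    sign = np.sign(stacked.sum(axis=0))

    # 4. Disjoint merge: average only entries agreeing with the elected sign.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged = (stacked * agree).sum(axis=0) / counts

    # 5. Scale the merged task vector and add it back to the base.
    return base + weight * merged

base = np.zeros(6)
a = np.array([0.9, -0.1, 0.4, 0.0, -0.7, 0.2])
b = np.array([0.8, 0.3, -0.5, 0.1, -0.6, 0.0])
print(ties_merge(base, [a, b]))
```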

### Models Merged

The following models were included in the merge:
* [NovaSky-AI/Sky-T1-mini](https://huggingface.co/NovaSky-AI/Sky-T1-mini)
* [Rombo-Org/Rombo-LLM-V2.5-Qwen-7b](https://huggingface.co/Rombo-Org/Rombo-LLM-V2.5-Qwen-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Qwen/Qwen2.5-7B-Instruct-1M
    # no parameters necessary for base model
  - model: Rombo-Org/Rombo-LLM-V2.5-Qwen-7b
    parameters:
      density: 0.5
      weight: 0.5
  - model: NovaSky-AI/Sky-T1-mini
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: Qwen/Qwen2.5-7B-Instruct-1M
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
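
The merge can be reproduced by saving this configuration to a file and running mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./output-directory`.

Below is a minimal inference sketch using transformers. The repo id is a placeholder (this card does not state where the merged weights are hosted), so substitute the actual model path; the generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-merge"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's dtype
    device_map="auto",
)

# Qwen2.5-based models ship a chat template, so apply_chat_template works here.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```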