---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Nitral-AI/KukulStanta-7B
- AlekseiPravdin/Seamaiiza-7B-v1
---

# KukulStanta-7B-Seamaiiza-7B-v1-slerp-merge

KukulStanta-7B-Seamaiiza-7B-v1-slerp-merge is a merged language model built from two parents: [Nitral-AI/KukulStanta-7B](https://huggingface.co/Nitral-AI/KukulStanta-7B) and [AlekseiPravdin/Seamaiiza-7B-v1](https://huggingface.co/AlekseiPravdin/Seamaiiza-7B-v1). The merge was performed with [mergekit](https://github.com/cg123/mergekit), a toolkit for combining the weights of pretrained language models; the exact configuration and a reproduction sketch are given below.

## 🧩 Merge Configuration

The models were merged with spherical linear interpolation (SLERP), which interpolates the two models' weights along a great-circle arc rather than a straight line, giving a smooth blend at every layer. [Nitral-AI/KukulStanta-7B](https://huggingface.co/Nitral-AI/KukulStanta-7B) served as the base model, and the interpolation factor `t` varies per layer and per module type as specified below.
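
In SLERP, `t = 0` returns the base model's weights unchanged and `t = 1` returns the other model's; intermediate values follow the arc between the two weight vectors, preserving their geometry better than plain averaging. The function below is a minimal sketch of the operation on a single tensor, not mergekit's exact implementation:

```python
# Illustrative SLERP between two weight tensors; simplified sketch only.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation: t=0 -> v0, t=1 -> v1."""
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the flattened weight vectors.
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * v0 + t * v1
    sin_omega = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / sin_omega) * v0 \
        + (torch.sin(t * omega) / sin_omega) * v1

# Example: blend two weight matrices halfway.
merged = slerp(0.5, torch.randn(16, 16), torch.randn(16, 16))
```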

**Configuration:**

```yaml
slices:
  - sources:
      - model: Nitral-AI/KukulStanta-7B
        layer_range: [0, 31]
      - model: AlekseiPravdin/Seamaiiza-7B-v1
        layer_range: [0, 31]
merge_method: slerp
base_model: Nitral-AI/KukulStanta-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
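
The `t` lists are interpolation anchors that mergekit spreads across the layer stack: one schedule for self-attention weights, another for MLP weights, and the bare `value: 0.5` for all remaining parameters. Given this file, the merge can be reproduced with the `mergekit-yaml` CLI or from Python; the sketch below uses the Python entry point described in mergekit's README, with placeholder paths:

```python
# Reproduction sketch; assumes `pip install mergekit` and that the YAML
# above is saved as slerp-config.yml. Paths are placeholders.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("slerp-config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./KukulStanta-7B-Seamaiiza-7B-v1-slerp-merge",  # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if available
        copy_tokenizer=True,             # carry the base model's tokenizer over
    ),
)
```

The CLI equivalent is `mergekit-yaml slerp-config.yml ./KukulStanta-7B-Seamaiiza-7B-v1-slerp-merge --cuda`.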

## Model Features

This merge combines the generative capabilities of [Nitral-AI/KukulStanta-7B](https://huggingface.co/Nitral-AI/KukulStanta-7B) with the fine-tuning of [AlekseiPravdin/Seamaiiza-7B-v1](https://huggingface.co/AlekseiPravdin/Seamaiiza-7B-v1), yielding a general-purpose 7B model for text generation. Inheriting from both parents, KukulStanta-7B-Seamaiiza-7B-v1-slerp-merge aims at strong context handling and nuanced generation across a range of NLP tasks.
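
A minimal usage sketch with 🤗 Transformers is below; the repository id is an assumption about where the merge is hosted, and the prompt is arbitrary:

```python
# Usage sketch; the repo id below is assumed, not confirmed by the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AlekseiPravdin/KukulStanta-7B-Seamaiiza-7B-v1-slerp-merge"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype above
    device_map="auto",
)

inputs = tokenizer(
    "Write a two-sentence story about a lighthouse.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```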

## Evaluation Results

### KukulStanta-7B

The evaluation results for [Nitral-AI/KukulStanta-7B](https://huggingface.co/Nitral-AI/KukulStanta-7B) are as follows:

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 70.95 |
| AI2 Reasoning Challenge (25-shot) | 68.43 |
| HellaSwag (10-shot)               | 86.37 |
| MMLU (5-shot)                     | 65.00 |
| TruthfulQA (0-shot)               | 62.19 |
| Winogrande (5-shot)               | 80.03 |
| GSM8k (5-shot)                    | 63.68 |

### Seamaiiza-7B-v1

Detailed evaluation results for [AlekseiPravdin/Seamaiiza-7B-v1](https://huggingface.co/AlekseiPravdin/Seamaiiza-7B-v1) have not been published. Within the merge, it is expected to complement KukulStanta-7B's measured strengths across text generation tasks.

## Limitations

While KukulStanta-7B-Seamaiiza-7B-v1-slerp-merge inherits the strengths of both parent models, it may also carry over their limitations and biases. Users should evaluate generated content carefully, especially in sensitive applications.