---
base_model:
- unsloth/Llama-3.3-70B-Instruct
- pankajmathur/orca_mini_v9_3_70B
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- Undi95/Sushi-v1.4
- Sao10K/70B-L3.3-Cirrus-x1
- Nohobby/L3.3-Prikol-70B-v0.1a
library_name: transformers
tags:
- mergekit
- merge

---
# Prikol

> I don't even know anymore

![I need to be isolated from society](https://files.catbox.moe/x9t3zo.png)

### Overview

A merge of some Llama 3.3 models because um uh yeah

Went extra schizo on the recipe, hoping for an extra fun result, and... well, I guess it's an overall improvement over the previous revision. It's a tiny bit smarter and has even more distinct swipes and nice dialogues, but for some reason it's damn sloppy.

I've published the second step of this merge as a separate model: https://huggingface.co/Nohobby/AbominationSnowPig. I'd say its results are more interesting, but it's not as usable as this one.

Prompt format: Llama3, OR Llama3 Context with ChatML Instruct. The latter actually works a bit better.

Samplers: [This kinda works but I'm weird](https://files.catbox.moe/olsiei.json)

### Quants

[Static](https://huggingface.co/mradermacher/L3.3-Prikol-70B-v0.2-GGUF) | [Imatrix](https://huggingface.co/mradermacher/L3.3-Prikol-70B-v0.2-i1-GGUF)

## Merge Details
### Merging Steps

### Step1

```yaml
models:
  - model: pankajmathur/orca_mini_v9_3_70B
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 1
      density: 0.55
      gamma: 0.03
  - model: Undi95/Sushi-v1.4
    parameters:
      weight: 0.069
      gamma: 0.001
      density: 0.911
merge_method: breadcrumbs
base_model: pankajmathur/orca_mini_v9_3_70B
parameters:
  int8_mask: true
  rescale: true
  normalize: true
dtype: bfloat16
tokenizer_source: base
```
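For intuition, the `breadcrumbs` method sparsifies each model's task vector (finetuned weights minus base weights) before merging. Below is a rough numpy sketch of that masking, under my reading of the parameters: `gamma` drops the largest-magnitude outlier entries and `density` is the total fraction of entries kept. The helper name is illustrative, not mergekit's actual code.

```python
import numpy as np

def breadcrumbs_mask(delta, density=0.55, gamma=0.03):
    """Sketch of breadcrumbs sparsification (hypothetical helper):
    drop the top `gamma` fraction of entries by magnitude (outliers),
    then keep only the next `density` fraction, zeroing the rest."""
    flat = np.abs(delta).ravel()
    n = flat.size
    order = np.argsort(flat)            # indices, ascending by magnitude
    n_top = int(gamma * n)              # outliers pruned from the top
    n_keep = int(density * n)           # entries that survive in total
    keep_idx = order[n - n_top - n_keep : n - n_top]
    mask = np.zeros(n, dtype=bool)
    mask[keep_idx] = True
    return (delta.ravel() * mask).reshape(delta.shape)

# toy task vector
rng = np.random.default_rng(0)
delta = rng.normal(size=(8, 8))
sparse = breadcrumbs_mask(delta)
print(np.count_nonzero(sparse) / delta.size)  # ≈ 0.55 of entries survive
```

With `weight: 0.069` and `density: 0.911`, Sushi contributes a small, mostly dense sprinkle on top of the EVA delta.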

### Step2 [(AbominationSnowPig)](https://huggingface.co/Nohobby/AbominationSnowPig)

```yaml
dtype: bfloat16
tokenizer_source: base
merge_method: nuslerp
parameters:
  nuslerp_row_wise: true
models:
  - model: unsloth/Llama-3.3-70B-Instruct
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
        - filter: up_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - value: 0
  - model: Step1
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0]
        - filter: up_proj
          value: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        - value: 1
```
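The per-filter weight lists above are layer gradients: the two parents use complementary masks, so e.g. `up_proj` comes entirely from Instruct and `down_proj` entirely from Step1, while `v_proj`/`o_proj`/`gate_proj` swap ownership across the depth of the stack. My understanding (an assumption, not confirmed mergekit internals) is that such a list is spread evenly over the layer indices and linearly interpolated between anchor points, roughly like this:

```python
import numpy as np

def expand_gradient(anchors, num_layers=80):
    """Expand a weight-gradient list like [0, 0, 1, ..., 0, 0] into one
    value per layer by placing the anchors evenly across the stack and
    linearly interpolating (80 layers for a Llama-3.3-70B model)."""
    xs = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), xs, anchors)

v_proj = expand_gradient([0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(v_proj[0], v_proj[40], v_proj[-1])  # 0.0 1.0 0.0
```

So for `v_proj`, Step1 dominates the first and last layers while Instruct takes the middle, and vice versa per filter.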

### Step3

```yaml
base_model: AbominationSnowPig
merge_method: model_stock
dtype: bfloat16
models:
  - model: Sao10K/70B-L3.3-Cirrus-x1
  - model: Nohobby/L3.3-Prikol-70B-v0.1a
```
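`model_stock` needs no weights because it derives the merge ratio geometrically: the ratio between the averaged finetuned models and the base comes from the angle between their task vectors (Model Stock, Jang et al.). A minimal numpy sketch of that idea for two models, with illustrative names rather than mergekit internals:

```python
import numpy as np

def model_stock_ratio(base, models):
    """Interpolation ratio t = N*cos(theta) / (1 + (N-1)*cos(theta)),
    where cos(theta) is the cosine between the two task vectors."""
    d0 = (models[0] - base).ravel()
    d1 = (models[1] - base).ravel()
    cos = np.dot(d0, d1) / (np.linalg.norm(d0) * np.linalg.norm(d1))
    n = len(models)
    return n * cos / (1 + (n - 1) * cos)

def model_stock_merge(base, models):
    """Blend the average of the finetuned models back toward the base."""
    t = model_stock_ratio(base, models)
    avg = sum(models) / len(models)
    return t * avg + (1 - t) * base

rng = np.random.default_rng(1)
base = rng.normal(size=(4, 4))
m1 = base + rng.normal(scale=0.1, size=(4, 4))
m2 = base + rng.normal(scale=0.1, size=(4, 4))
merged = model_stock_merge(base, [m1, m2])
```

When the two task vectors point the same way (cos ≈ 1), t → 1 and you just get their average; the more they disagree, the harder the result is pulled back toward AbominationSnowPig.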