Casual-Autopsy committed
Commit 7410758
Parent(s): 8840285

Update README.md

Files changed (1):
  1. README.md (+306 −18)

README.md CHANGED
@@ -11,11 +11,8 @@ tags:
  - rp
  - roleplay
  - role-play
- - chain-of-thoughts
  - summarization
  - emotion classification
- - biology
- - psychology
  base_model:
  - nothingiisreal/L3-8B-Celeste-v1
  - Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
@@ -33,6 +30,14 @@ base_model:
  - OEvortex/Emotional-llama-8B
  - lighteternal/Llama3-merge-biomed-8b
  - Casual-Autopsy/Llama3-merge-psychotherapy-8b
+ - Sao10K/L3-8B-Tamamo-v1
+ - ResplendentAI/Nymph_8B
+ - ChaoticNeutrals/T-900-8B
+ - Sao10K/L3-8B-Niitama-v1
+ - bluuwhale/L3-SthenoMaidBlackroot-8B-V1
+ - Hastagaras/Jamet-8B-L3-MK.V-Blackroot
+ - Hastagaras/Halu-8B-Llama3-Blackroot
+ - crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
  ---
  | <img src="https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B/resolve/main/Card-Assets/NovaKid-Girl.jpeg" width="50%" height="50%" style="display: block; margin: auto;"> |
  |:---:|
@@ -45,10 +50,14 @@
  ***
  ***
  ## Presets
- I've(or anyone else) yet to find good Textgen Preset so here's the starting point preset I use instead, It should get you by for now.
+
+ ***
+ ### Text Gen
+ The current good starting preset for this model. **Subject to change.**
+ **Settings by yours truly**
  ```yaml
- Top K: 50
- Top P: 0.85
+ Top K: 40
+ Min P: 0.075
  Repetition Penalty: 1.01
  # Don't make this higher; DRY handles the bulk of squashing repetition.
  # This is just to lightly nudge the bot to move the plot forward
@@ -68,6 +77,10 @@ Dynamic Temperature:
  Exponent: 0.85
  ```
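A quick note on the move from Top P to Min P above, for anyone new to the sampler: Min P discards every token whose probability falls below a fixed fraction of the most likely token's probability, so the cutoff tightens when the model is confident and loosens when it isn't. A minimal sketch of the idea, assuming plain logits; `min_p_filter` is a hypothetical helper, not any backend's API:

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.075) -> torch.Tensor:
    """Keep tokens with prob >= min_p * max_prob, then renormalize."""
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    probs = torch.where(probs >= threshold, probs, torch.zeros_like(probs))
    return probs / probs.sum(dim=-1, keepdim=True)

# Usage: sample the next token from the filtered distribution.
logits = torch.randn(1, 128256)  # stand-in for Llama 3 vocab logits
next_token = torch.multinomial(min_p_filter(logits), num_samples=1)
```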

+ ***
+ ### Context/Instruct
+ [Virt-io's SillyTavern](https://huggingface.co/Virt-io/SillyTavern-Presets) Presets work really well with this.
+
  ***
  ***
  ## Usage Info
@@ -110,6 +123,14 @@ The following models were used to make this merge:
  * [OEvortex/Emotional-llama-8B](https://huggingface.co/OEvortex/Emotional-llama-8B)
  * [lighteternal/Llama3-merge-biomed-8b](https://huggingface.co/lighteternal/Llama3-merge-biomed-8b)
  * [Casual-Autopsy/Llama3-merge-psychotherapy-8b](https://huggingface.co/Casual-Autopsy/Llama3-merge-psychotherapy-8b)
+ * [Sao10K/L3-8B-Tamamo-v1](https://huggingface.co/Sao10K/L3-8B-Tamamo-v1)
+ * [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)
+ * [ChaoticNeutrals/T-900-8B](https://huggingface.co/ChaoticNeutrals/T-900-8B)
+ * [Sao10K/L3-8B-Niitama-v1](https://huggingface.co/Sao10K/L3-8B-Niitama-v1)
+ * [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
+ * [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
+ * [Hastagaras/Halu-8B-Llama3-Blackroot](https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot)
+ * [crestf411/L3-8B-sunfall-v0.4-stheno-v3.2](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2)

  ***
  ***
@@ -118,8 +139,6 @@
  ***
  ### [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Casual-Autopsy__L3-Umbral-Mind-RP-v2.0-8B)
-
  **Explanation for AI RP newbies:** IFEval is the most important evaluation for RP AIs, as it determines how well the model can follow OOC instructions, Lorebooks, and, most importantly, character cards.
  The rest don't matter. At least not nearly as much as IFEval.

@@ -158,42 +177,167 @@ The following YAML configs were used to make this merge.
  ### Super-Nova-CRE_pt.1

  ```yaml
-
+ models:
+   - model: nothingiisreal/L3-8B-Celeste-v1
+   - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
+     parameters:
+       density: [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
+       weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
+   - model: Sao10K/L3-8B-Stheno-v3.2
+     parameters:
+       density: [0.65, 0.55, 0.5, 0.45, 0.35, 0.45, 0.5, 0.55, 0.65]
+       weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
+ merge_method: dare_ties
+ base_model: nothingiisreal/L3-8B-Celeste-v1
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: float32
+ out_dtype: bfloat16
  ```
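For readers curious what `dare_ties` does with these checkpoints: each fine-tune is reduced to a task vector (its delta from the base model), DARE randomly drops a `1 - density` fraction of that delta and rescales the survivors by `1 / density`, and TIES-style sign election then resolves conflicts before the weighted deltas are added back to the base. The list-valued `density`/`weight` entries are per-layer gradients that mergekit interpolates across the network. A rough single-tensor sketch, assuming torch; an illustration of the technique, not mergekit's actual code:

```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """DARE: randomly drop a (1 - density) fraction, rescale the rest."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties(base: torch.Tensor, finetunes: list[torch.Tensor],
              densities: list[float], weights: list[float]) -> torch.Tensor:
    deltas = torch.stack([dare(ft - base, d) * w
                          for ft, d, w in zip(finetunes, densities, weights)])
    elected = torch.sign(deltas.sum(dim=0))        # majority sign per parameter
    agree = torch.where(torch.sign(deltas) == elected,
                        deltas, torch.zeros_like(deltas))
    return base + agree.sum(dim=0)                 # normalize: false -> plain sum
```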

  ***
  ### Super-Nova-CRE_pt.2

  ```yaml
-
+ models:
+   - model: nothingiisreal/L3-8B-Celeste-v1
+   - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
+     parameters:
+       density: [0.35, 0.45, 0.5, 0.55, 0.65, 0.55, 0.5, 0.45, 0.35]
+       weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
+   - model: Sao10K/L3-8B-Lunaris-v1
+     parameters:
+       density: [0.65, 0.55, 0.5, 0.45, 0.35, 0.45, 0.5, 0.55, 0.65]
+       weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
+ merge_method: dare_ties
+ base_model: nothingiisreal/L3-8B-Celeste-v1
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: float32
+ out_dtype: bfloat16
  ```

  ***
  ### Super-Nova-UNC_pt.1

  ```yaml
-
+ models:
+   - model: turboderp/llama3-turbcat-instruct-8b
+   - model: ChaoticNeutrals/Domain-Fusion-L3-8B
+     parameters:
+       density: 0.5
+       weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
+   - model: migtissera/Llama-3-8B-Synthia-v3.5
+     parameters:
+       density: 0.5
+       weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
+ merge_method: dare_ties
+ base_model: turboderp/llama3-turbcat-instruct-8b
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: float32
+ out_dtype: bfloat16
  ```

  ***
  ### Super-Nova-UNC_pt.2

  ```yaml
-
+ models:
+   - model: turboderp/llama3-turbcat-instruct-8b
+   - model: TheDrummer/Llama-3SOME-8B-v2
+     parameters:
+       density: 0.5
+       weight: [0.165, 0.495, 0.495, 0.165, 0.165, 0.495, 0.495, 0.165]
+   - model: ChaoticNeutrals/Hathor_RP-v.01-L3-8B
+     parameters:
+       density: 0.5
+       weight: [0.495, 0.165, 0.165, 0.495, 0.495, 0.165, 0.165, 0.495]
+ merge_method: dare_ties
+ base_model: turboderp/llama3-turbcat-instruct-8b
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: float32
+ out_dtype: bfloat16
  ```

  ***
  ### Super-Nova-INT_pt.1

  ```yaml
-
+ models:
+   - model: TheSkullery/llama-3-cat-8b-instruct-v1
+   - model: FPHam/L3-8B-Everything-COT
+     parameters:
+       density: 0.5
+       weight: [0.139, 0.139, 0.208, 0.139, 0.208]
+   - model: Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
+     parameters:
+       density: 0.5
+       weight: [0.139, 0.208, 0.139, 0.208, 0.139]
+   - model: OEvortex/Emotional-llama-8B
+     parameters:
+       density: 0.5
+       weight: [0.208, 0.139, 0.208, 0.139, 0.139]
+   - model: lighteternal/Llama3-merge-biomed-8b
+     parameters:
+       density: 0.5
+       weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+   - model: Casual-Autopsy/Llama3-merge-psychotherapy-8b
+     parameters:
+       density: 0.5
+       weight: [0.139, 0.208, 0.139, 0.208, 0.139]
+ merge_method: ties
+ base_model: TheSkullery/llama-3-cat-8b-instruct-v1
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: float32
+ out_dtype: bfloat16
  ```
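Plain `ties`, used here, differs from the `dare_ties` blocks above only in the sparsification step: rather than dropping weights at random, it keeps the top `density` fraction of each task vector by magnitude before the same sign election. A sketch of that trim step, under the same caveats as before:

```python
import torch

def ties_trim(delta: torch.Tensor, density: float = 0.5) -> torch.Tensor:
    """Keep only the top `density` fraction of entries by absolute value."""
    flat = delta.abs().flatten()
    k = max(1, int(flat.numel() * density))
    threshold = flat.kthvalue(flat.numel() - k + 1).values  # k-th largest |delta|
    return torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))
```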

  ***
  ### Super-Nova-INT_pt.2

  ```yaml
-
+ models:
+   - model: TheSkullery/llama-3-cat-8b-instruct-v1
+   - model: FPHam/L3-8B-Everything-COT
+     parameters:
+       density: 0.9
+       gamma: 0.01
+       weight: [0.139, 0.208, 0.208, 0.139, 0.139]
+   - model: Ayush-1722/Meta-Llama-3-8B-Instruct-Summarize-v0.2-24K-LoRANET-Merged
+     parameters:
+       density: 0.9
+       gamma: 0.01
+       weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+   - model: OEvortex/Emotional-llama-8B
+     parameters:
+       density: 0.9
+       gamma: 0.01
+       weight: [0.139, 0.139, 0.208, 0.208, 0.139]
+   - model: lighteternal/Llama3-merge-biomed-8b
+     parameters:
+       density: 0.9
+       gamma: 0.01
+       weight: [0.139, 0.208, 0.139, 0.208, 0.139]
+   - model: Casual-Autopsy/Llama3-merge-psychotherapy-8b
+     parameters:
+       density: 0.9
+       gamma: 0.01
+       weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+ merge_method: breadcrumbs_ties
+ base_model: TheSkullery/llama-3-cat-8b-instruct-v1
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: float32
+ out_dtype: bfloat16
  ```
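`breadcrumbs_ties` changes the trim rule once more: besides dropping the smallest values, `gamma` also discards the largest-magnitude outliers, keeping the mid-sized "breadcrumbs" (hence `density: 0.9` paired with `gamma: 0.01` above). This is my reading of the method, sketched loosely:

```python
import torch

def breadcrumbs_trim(delta: torch.Tensor, density: float = 0.9,
                     gamma: float = 0.01) -> torch.Tensor:
    """Zero the smallest (1 - density) fraction and the largest gamma fraction."""
    flat = delta.flatten()
    order = flat.abs().argsort()                   # indices, ascending magnitude
    keep = torch.ones_like(flat, dtype=torch.bool)
    keep[order[:int(flat.numel() * (1 - density))]] = False   # tiny values
    n_outliers = int(flat.numel() * gamma)
    if n_outliers:
        keep[order[-n_outliers:]] = False                     # outlier values
    return (flat * keep).reshape(delta.shape)
```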

  ***
@@ -201,39 +345,183 @@ The following YAML configs were used to make this merge.

  ```yaml

+ models:
+   - model: Casual-Autopsy/Super-Nova-CRE_pt.1
+   - model: Casual-Autopsy/Super-Nova-CRE_pt.2
+ merge_method: slerp
+ base_model: Casual-Autopsy/Super-Nova-CRE_pt.1
+ parameters:
+   t:
+     - filter: self_attn
+       value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
+     - filter: mlp
+       value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
+     - value: 0.5
+   embed_slerp: true
+ dtype: float32
+ out_dtype: bfloat16
  ```

  ***
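This block and the two after it recombine each pair of parts (this one yields Super-Nova-CRE) with SLERP rather than a plain average: interpolation follows the arc between the two weight vectors, `t` is the fraction moved toward the second model, and the `filter` entries give self-attention and MLP tensors their own per-layer `t` curves. A compact sketch that flattens each tensor to a vector; illustrative only:

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    va, vb = a.flatten(), b.flatten()
    na, nb = va / va.norm(), vb / vb.norm()
    omega = torch.acos(torch.clamp(na @ nb, -1.0, 1.0))  # angle between models
    if omega < 1e-4:                    # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    coef_a = torch.sin((1 - t) * omega) / torch.sin(omega)
    coef_b = torch.sin(t * omega) / torch.sin(omega)
    return (coef_a * va + coef_b * vb).reshape(a.shape)
```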
  ### Super-Nova-UNC

  ```yaml
-
+ models:
+   - model: Casual-Autopsy/Super-Nova-UNC_pt.1
+   - model: Casual-Autopsy/Super-Nova-UNC_pt.2
+ merge_method: slerp
+ base_model: Casual-Autopsy/Super-Nova-UNC_pt.1
+ parameters:
+   t:
+     - value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
+   embed_slerp: true
+ dtype: float32
+ out_dtype: bfloat16
  ```

  ***
  ### Super-Nova-INT

+ ```yaml
+ models:
+   - model: Casual-Autopsy/Super-Nova-INT_pt.1
+   - model: Casual-Autopsy/Super-Nova-INT_pt.2
+ merge_method: slerp
+ base_model: Casual-Autopsy/Super-Nova-INT_pt.1
+ parameters:
+   t:
+     - value: 0.5
+   embed_slerp: true
+ dtype: float32
+ out_dtype: bfloat16
+ ```
+
+ ***
+ ### Super-Nova-RP_stp.1
+
  ```yaml

+ models:
+   - model: Casual-Autopsy/Super-Nova-CRE
+   - model: Casual-Autopsy/Super-Nova-UNC
+ merge_method: slerp
+ base_model: Casual-Autopsy/Super-Nova-CRE
+ parameters:
+   t:
+     - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
+   embed_slerp: true
+ dtype: float32
+ out_dtype: bfloat16
+
  ```

  ***
- ### Super-Nova-RP_pt.1
+ ### Super-Nova-RP_stp.2

  ```yaml
+ models:
+   - model: Casual-Autopsy/Super-Nova-RP_stp.1
+   - model: Casual-Autopsy/Super-Nova-INT
+ merge_method: slerp
+ base_model: Casual-Autopsy/Super-Nova-RP_stp.1
+ parameters:
+   t:
+     - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
+   embed_slerp: true
+ dtype: float32
+ out_dtype: bfloat16
+ ```
+
+ ***
+ ### Super-Nova-RP_pt.1
+
+ ```yaml
+ models:
+   - model: Casual-Autopsy/Super-Nova-RP_stp.2
+   - model: Sao10K/L3-8B-Tamamo-v1
+     parameters:
+       density: [0.4, 0.6, 0.5, 0.6, 0.4]
+       epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
+       lambda: 0.85
+       weight: [-0.01523, 0.01768, -0.01384, 0.01835, -0.01247]
+   - model: ResplendentAI/Nymph_8B
+     parameters:
+       density: [0.65, 0.35, 0.5, 0.35, 0.65]
+       epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
+       lambda: 0.85
+       weight: [0.01823, -0.01647, 0.01422, -0.01975, 0.01128]
+   - model: ChaoticNeutrals/T-900-8B
+     parameters:
+       density: [0.35, 0.65, 0.5, 0.65, 0.35]
+       epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
+       lambda: 0.85
+       weight: [-0.01891, 0.01554, -0.01325, 0.01791, -0.01458]
+   - model: Sao10K/L3-8B-Niitama-v1
+     parameters:
+       density: [0.6, 0.4, 0.5, 0.4, 0.6]
+       epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
+       lambda: 0.85
+       weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
+ merge_method: della
+ base_model: Casual-Autopsy/Super-Nova-RP_stp.2
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: float32
+ out_dtype: bfloat16
  ```

  ***
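`della` is a magnitude-aware variant of DARE: each parameter's drop probability varies with its magnitude rank inside a window of width `epsilon` centered on `1 - density` (larger deltas survive more often), survivors are rescaled to keep the merge unbiased, and `lambda` scales the merged delta before it is added to the base. The tiny alternating-sign `weight` lists above therefore nudge the base back and forth by under two percent per model. The sketch below is my reading of the DELLA paper, not mergekit's exact implementation:

```python
import torch

def della_drop(delta: torch.Tensor, density: float, epsilon: float) -> torch.Tensor:
    """MAGPRUNE-style drop: higher-magnitude entries get lower drop probability."""
    flat = delta.flatten()
    # Rank 0.0 = smallest |delta|, 1.0 = largest.
    rank = flat.abs().argsort().argsort().float() / max(flat.numel() - 1, 1)
    p_drop = ((1 - density) + epsilon / 2 - epsilon * rank).clamp(0.0, 1 - 1e-6)
    mask = torch.bernoulli(1 - p_drop)
    return (flat * mask / (1 - p_drop)).reshape(delta.shape)
```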
  ### Super-Nova-RP_pt.2

  ```yaml
-
+ models:
+   - model: Casual-Autopsy/Super-Nova-RP_stp.2
+   - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
+     parameters:
+       density: [0.4, 0.6, 0.5, 0.6, 0.4]
+       epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
+       lambda: 0.85
+       weight: [-0.01935, 0.01785, -0.01512, 0.01809, -0.01371]
+   - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
+     parameters:
+       density: [0.65, 0.35, 0.5, 0.35, 0.65]
+       epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
+       lambda: 0.85
+       weight: [0.01847, -0.01468, 0.01503, -0.01822, 0.01459]
+   - model: Hastagaras/Halu-8B-Llama3-Blackroot
+     parameters:
+       density: [0.35, 0.65, 0.5, 0.65, 0.35]
+       epsilon: [0.1, 0.1, 0.25, 0.1, 0.1]
+       lambda: 0.85
+       weight: [-0.01578, 0.01821, -0.01753, 0.01677, -0.01442]
+   - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
+     parameters:
+       density: [0.6, 0.5, 0.5, 0.5, 0.6]
+       epsilon: [0.15, 0.15, 0.25, 0.15, 0.15]
+       lambda: 0.85
+       weight: [0.01667, -0.01740, 0.01560, -0.01564, 0.01315]
+ merge_method: della
+ base_model: Casual-Autopsy/Super-Nova-RP_stp.2
+ parameters:
+   normalize: false
+   int8_mask: true
+ dtype: float32
+ out_dtype: bfloat16
  ```

  ***
  ### L3-Super-Nova-RP-8B

  ```yaml
-
+ models:
+   - model: Casual-Autopsy/Super-Nova-RP_stp.2
+   - model: /kaggle/input/super-nova-rp_pt.4/transformers/hf/1
+ merge_method: slerp
+ base_model: /kaggle/input/super-nova-rp_pt.3/transformers/hf/1
+ parameters:
+   t:
+     - value: 0.5
+ dtype: float32
+ out_dtype: bfloat16
  ```
 