---
base_model: grimjim/kukulemon-spiked-9B
library_name: transformers
quantized_by: grimjim
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# kukulemon-spiked-9B-8.0bpw_h8_exl2

This is a frankenmerge of a pre-trained language model, created using [mergekit](https://github.com/cg123/mergekit). As an experiment, it appears to be a partial success.

Lightly tested at temperature 1 and minP 0.01 with ChatML prompts; the model also supports Alpaca prompts. Context length is 8K, a result of the model's Mistral v0.1 provenance. Output remained coherent and stable throughout this testing.
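
Below is a minimal sketch of those test-time settings using `transformers`, assuming a recent release (for `min_p` support) and sufficient VRAM; the prompt text is illustrative only, and the repo ID points at the full weights rather than this quant.

```python
# Sketch: sample with temperature 1 and minP 0.01 under a ChatML prompt,
# matching the light testing described above. The prompt content is
# illustrative; min_p requires a recent transformers release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/kukulemon-spiked-9B"  # full weights; swap in a quant as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# ChatML-formatted prompt.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nTell me a short story.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,
    min_p=0.01,  # minP sampling, as in the testing above
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```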

The merge formula for this frankenmerge is below. It is conjectured that the shorter first section is not key to variation, that the middle segment is key to balancing reasoning and variation, and that the lengthy final section is required for convergence and eventual stability. The resulting internal instability is probably better suited to narratives involving unstable and/or unhinged characters and situations.

- Full weights: [grimjim/kukulemon-spiked-9B](https://huggingface.co/grimjim/kukulemon-spiked-9B)
- GGUF quants: [grimjim/kukulemon-spiked-9B-GGUF](https://huggingface.co/grimjim/kukulemon-spiked-9B-GGUF)

## Merge Details
### Merge Method

This model was merged using the passthrough merge method, which copies the specified layer slices into the output model unchanged, without any weight interpolation.

### Models Merged

The following models were included in the merge:
* [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: grimjim/kukulemon-7B
      layer_range: [0, 12]
  - sources:
    - model: grimjim/kukulemon-7B
      layer_range: [8, 16]
  - sources:
    - model: grimjim/kukulemon-7B
      layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
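
As a quick sanity check of the slice arithmetic, assuming the standard 32-layer Mistral-7B architecture underlying kukulemon-7B: the three `layer_range` entries are half-open intervals, so layers 8-11 and 12-15 each appear twice in the stack.

```python
# Sanity check of the configuration above (assumes a 32-layer Mistral-7B
# base; layer_range is half-open, [start, end)).
slices = [(0, 12), (8, 16), (12, 32)]
total = sum(end - start for start, end in slices)
print(total)  # 40 layers, up from 32 -> roughly 9B parameters from a 7B base
```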