divinetaco committed
Commit 50cece1
1 Parent(s): bc22520

Upload README.md with huggingface_hub

Files changed (1)
1. README.md +3 -3
README.md CHANGED
@@ -8,12 +8,12 @@ tags:
 - mergekit
 - merge
 ---
-# aranea-ancilla-70b-v1.0
+# aranea-ancilla-116b-v1.0
 **aka MiquMaid-v1-70B + interleaved WinterGoddess-1.4x-70B-L2**
 
 ![image/png](https://huggingface.co/divinetaco/aranea-ancilla-116b-v1.0/resolve/main/aranea-ancilla.png)
 
-A [mergekit](https://github.com/arcee-ai/mergekit) frankenmerge based on [MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B) with interleaved layers of [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).
+A [mergekit](https://github.com/arcee-ai/mergekit) frankenmerge based on [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B) with interleaved layers of [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).
 This was the top-performing model from a series of merge experiments to create a highly coherent creative writing model.
 
 Tests consisted of a series of private benchmarks and manual comparisons. A number of different base models, interleave models and layer offsets were compared.
@@ -33,7 +33,7 @@ No license. Component models based on the [Mistral AI Miqu-1](https://huggingfac
 ### Interesting observations from benchmarking
 
 - A 10-layer interleave stride with a 20-layer interleave width consistently outperformed alternative combinations.
-- Offsetting the interleaved model's first set of layers generally improved coherency. [14-30] reliably beat the [10-30] mergekit slice configuration for combinations of models.
+- Offsetting the interleaved model's first set of layers generally improved coherency. [14-30] reliably beat the [10-30] mergekit slice configuration for various combinations of models.
 - Quality of resulting merges can vary wildly. Whilst a merge of two strong models tends to produce a strong frankenstein model, this rule does not always hold true.
 
 ### Quantizations
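For readers unfamiliar with mergekit's slice syntax, the interleave pattern described in the observations maps onto a passthrough configuration along the following lines. This is a hypothetical sketch, not the published recipe: the exact slice boundaries of aranea-ancilla-116b-v1.0 are not given in this README, so the layer ranges below only illustrate a 20-layer interleave width at a 10-layer stride, with the offset [14-30] first slice noted above.

```yaml
# Illustrative mergekit passthrough config -- NOT the published recipe.
# Pattern shown: 20-layer slices of the interleave model inserted at a
# 10-layer stride, with the first interleaved slice offset to [14, 30].
slices:
  - sources:
      - model: NeverSleep/MiquMaid-v1-70B
        layer_range: [0, 20]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [14, 30]   # offset first slice, per the observation above
  - sources:
      - model: NeverSleep/MiquMaid-v1-70B
        layer_range: [20, 40]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [30, 50]
  - sources:
      - model: NeverSleep/MiquMaid-v1-70B
        layer_range: [40, 60]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [50, 70]
  - sources:
      - model: NeverSleep/MiquMaid-v1-70B
        layer_range: [60, 80]
merge_method: passthrough
dtype: float16
```

A config of this shape would be built with mergekit's standard CLI, e.g. `mergekit-yaml config.yaml ./merged-model`; stacking the two 80-layer components this way yields roughly 136 layers, which is how two 70B models combine into a ~116B frankenmerge.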