---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- aurelian
- WinterGoddess
- frankenmerge
- 120b
- 32k
---
# BigAurelian v0.5 120b 32k
A Goliath-120b-style frankenmerge of aurelian-v0.5-70b-32K and WinterGoddess-1.4x-70b. The goal is Goliath-like performance with an extended 32k context size. **Important:** Use a positional embedding compression factor (**compress_pos_emb**) of **8** when loading this model.
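If you load the model with plain `transformers` instead of a loader that exposes `compress_pos_emb` directly, the same factor-8 compression can be expressed as linear RoPE scaling. A minimal sketch (the model path is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path -- point this at wherever the merged weights live.
model_path = "path/to/BigAurelian-v0.5-120b-32k"

tokenizer = AutoTokenizer.from_pretrained(model_path)

# compress_pos_emb = 8 is linear positional-embedding compression,
# i.e. linear RoPE scaling with factor 8.0 in transformers terms.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    rope_scaling={"type": "linear", "factor": 8.0},
    device_map="auto",
)
```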
# Prompting Format
Llama2 and Alpaca.
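For reference, the two templates look like this (a sketch; the system prompt and instruction text are placeholders):

```python
# Standard Llama2-chat template (system prompt is illustrative).
llama2_prompt = (
    "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\n"
    "Write a short story about a lighthouse keeper. [/INST]"
)

# Standard Alpaca template.
alpaca_prompt = (
    "### Instruction:\n"
    "Write a short story about a lighthouse keeper.\n\n"
    "### Response:\n"
)
```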
# Merge Process
The models used in the merge are [aurelian-v0.5-70b-32K](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16) and [WinterGoddess-1.4x-70b](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).
The layer mix, written out as a mergekit passthrough config (the `merge_method` and `dtype` lines are assumptions; the source models and layer ranges are as published):
```yaml
# Goliath-style interleave of the two donors. merge_method and
# dtype are assumed; the slice ranges are the published layer mix.
merge_method: passthrough
dtype: float16
slices:
  - sources:
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [0, 16]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [8, 24]
  - sources:
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [17, 32]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [25, 40]
  - sources:
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [33, 48]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [41, 56]
  - sources:
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [49, 64]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [57, 72]
  - sources:
      - model: grimulkan/aurelian-v0.5-70b-rope8-32K-fp16
        layer_range: [65, 80]
```
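The overlapping 16-layer slices alternate between the two donor models, the same interleaving pattern used for Goliath-120b. A config in this shape can be executed with mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./merged-output`.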
# Acknowledgements
- [@grimulkan](https://huggingface.co/grimulkan) for creating Aurelian
- [@Sao10K](https://huggingface.co/Sao10K) for creating WinterGoddess
- [@alpindale](https://huggingface.co/alpindale) for creating the original Goliath
- [@chargoddard](https://huggingface.co/chargoddard) for developing [mergekit](https://github.com/cg123/mergekit)