---
base_model:
- Hastagaras/Halu-OAS-8B-Llama3
- openlynn/Llama-3-Soliloquy-8B-v2
- grimjim/llama-3-aaditya-OpenBioLLM-8B
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- mlabonne/NeuralDaredevil-8B-abliterated
library_name: transformers
tags:
- mergekit
- merge
license: llama3
license_link: LICENSE
---
# Llama-3-Steerpike-v1-OAS-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model may produce characters who come across as "too" smart when the conversation veers into the analytical, which may be fine depending on the context.
Tested lightly with Instruct prompts, minP=0.01, and temperature 1+.
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as a base.
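Task arithmetic merges models by adding weighted "task vectors" (each fine-tuned model's weights minus the base model's weights) back onto the base. A minimal sketch on toy tensors, illustrating the arithmetic only (function and variable names here are illustrative, not mergekit internals):

```python
import torch

def task_arithmetic(base, tuned_models, weights):
    """Merge per parameter: base + sum_i w_i * (tuned_i - base)."""
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (tuned[name] - base_param)
                    for tuned, w in zip(tuned_models, weights))
        merged[name] = base_param + delta
    return merged

# Toy "models" with a single parameter tensor each
base = {"w": torch.tensor([1.0, 1.0])}
m1 = {"w": torch.tensor([2.0, 1.0])}   # task vector [1, 0]
m2 = {"w": torch.tensor([1.0, 3.0])}   # task vector [0, 2]
merged = task_arithmetic(base, [m1, m2], [0.5, 0.2])
# merged["w"] -> [1 + 0.5*1, 1 + 0.2*2] = [1.5, 1.4]
```

In the configuration below, the base model contributes its full weights and each donor model contributes its task vector scaled by the listed `weight`.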
### Models Merged
The following models were included in the merge:
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)
* [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
* [grimjim/llama-3-aaditya-OpenBioLLM-8B](https://huggingface.co/grimjim/llama-3-aaditya-OpenBioLLM-8B)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: mlabonne/NeuralDaredevil-8B-abliterated
  - layer_range: [0, 32]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.5
  - layer_range: [0, 32]
    model: Hastagaras/Halu-OAS-8B-Llama3
    parameters:
      weight: 0.2
  - layer_range: [0, 32]
    model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      weight: 0.03
  - layer_range: [0, 32]
    model: grimjim/llama-3-aaditya-OpenBioLLM-8B
    parameters:
      weight: 0.1
```