---
base_model:
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- arcee-ai/Llama-3.1-SuperNova-Lite
- VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
- unsloth/Llama-3.1-Storm-8B
- DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
- unsloth/Meta-Llama-3.1-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---

# Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged with the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as the base.
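In short, DARE randomly drops a fraction of each model's task vector (its delta from the base) and rescales the survivors, and TIES then resolves sign conflicts between models before summing. The snippet below is a minimal, illustrative sketch of that procedure on a single tensor; it is *not* mergekit's implementation, and the normalization detail and all names are assumptions for illustration only.

```python
import torch

def dare_ties_merge(base, finetuned, densities, weights, seed=145):
    """Illustrative DARE-TIES on one weight tensor (not mergekit's code).

    finetuned: list of tensors from the models being merged
    densities: DARE keep probabilities (`density` in the config)
    weights:   mixing weights (`weight` in the config)
    """
    g = torch.Generator().manual_seed(seed)  # the real merge uses mergekit's own RNG
    contributions = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base  # task vector
        # DARE: drop each element with probability (1 - density), rescale the rest
        mask = (torch.rand(delta.shape, generator=g) < density).to(delta.dtype)
        contributions.append(weight * mask * delta / density)
    stacked = torch.stack(contributions)
    # TIES: elect a per-parameter sign from the summed contributions,
    # then keep only the contributions that agree with it
    elected = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected).to(stacked.dtype)
    merged = (stacked * agree).sum(dim=0)
    # Rough analogue of `normalize: 1.0`: rescale by the agreeing weight mass
    w = torch.tensor(weights, dtype=stacked.dtype).view(-1, *([1] * base.dim()))
    merged = merged / (agree * w).sum(dim=0).clamp(min=1e-8)
    return base + merged

# Toy usage on random 4x4 "weights"
base = torch.zeros(4, 4)
models = [base + 0.01 * torch.randn(4, 4) for _ in range(3)]
print(dare_ties_merge(base, models, [0.9, 0.9, 0.9], [0.4, 0.3, 0.3]))
```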
### Models Merged

The following models were included in the merge:
- [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
- [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
- [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct)
- [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B)
- [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
dtype: bfloat16
merge_method: dare_ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
  random_seed: 145.0
slices:
- sources:
  - layer_range: [0, 32]
    model: unsloth/Llama-3.1-Storm-8B
    parameters:
      density: 0.95
      weight:
      - filter: self_attn.o_proj
        value: 0.0
      - filter: mlp.down_proj
        value: 0.0
      - filter: layers.19.
        value: 0.0
      - value: 0.28
  - layer_range: [0, 32]
    model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.9
      weight:
      - filter: self_attn.o_proj
        value: 0.0
      - filter: mlp.down_proj
        value: 0.0
      - filter: layers.19.
        value: 0.0
      - value: 0.27
  - layer_range: [0, 32]
    model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.92
      weight:
      - filter: self_attn.o_proj
        value: 0.0
      - filter: mlp.down_proj
        value: 0.0
      - filter: layers.19.
        value: 0.0
      - value: 0.25
  - layer_range: [0, 32]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.92
      weight:
      - filter: self_attn.o_proj
        value: 0.0
      - filter: mlp.down_proj
        value: 0.0
      - filter: layers.19.
        value: 0.0
      - value: 0.2
  - layer_range: [0, 32]
    model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
    parameters:
      density: 0.98
      weight:
      - filter: self_attn.o_proj
        value: 1.0
      - filter: mlp.down_proj
        value: 1.0
      - filter: layers.19.
        value: 1.0
      - value: 0.0
  - layer_range: [0, 32]
    model: unsloth/Meta-Llama-3.1-8B-Instruct
tokenizer:
  tokens:
    <|begin_of_text|>:
      force: true
      source: unsloth/Meta-Llama-3.1-8B-Instruct
    <|eot_id|>:
      force: true
      source: unsloth/Meta-Llama-3.1-8B-Instruct
    <|finetune_right_pad_id|>:
      force: true
      source: unsloth/Meta-Llama-3.1-8B-Instruct
```
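Note how the per-tensor `weight` filters partition the merge: the first four models contribute nothing to `self_attn.o_proj`, `mlp.down_proj`, or any tensor in layer 19 (value 0.0 on those filters), while DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B contributes *only* those tensors (value 1.0 there, 0.0 everywhere else). The `tokenizer` section pins the special tokens to the base model's definitions. To reproduce the merge, save the YAML above as e.g. `config.yaml` and run it through mergekit; below is a sketch using mergekit's Python API. The output path is a hypothetical name, and `MergeOptions` fields vary between mergekit versions, so check the docs for your install.

```python
# Sketch: reproducing the merge with mergekit's Python API.
# Assumes `pip install mergekit` and the YAML above saved as config.yaml.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",           # output directory (hypothetical name)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # write the tokenizer next to the weights
        lazy_unpickle=True,              # lower peak RAM while loading shards
    ),
)
```

The equivalent CLI call is `mergekit-yaml config.yaml ./merged-model`, and the result loads like any Llama 3.1 checkpoint, e.g. via `transformers.AutoModelForCausalLM.from_pretrained("./merged-model")`.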