---
base_model:
- AGI-0/Artificium-llama3.1-8B-001
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- vicgalle/Configurable-Llama-3.1-8B-Instruct
- Dampfinchen/Llama-3.1-8B-Ultra-Instruct
- NousResearch/Hermes-3-Llama-3.1-8B
- bunnycore/LLama-3.1-Hyper-Stock
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [vicgalle/Configurable-Llama-3.1-8B-Instruct](https://huggingface.co/vicgalle/Configurable-Llama-3.1-8B-Instruct) as the base.
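For reference, a merge like this one can be reproduced with mergekit. The sketch below is not the exact script used for this model; it assumes mergekit's Python API (`MergeConfiguration`, `run_merge`, `MergeOptions`) as shown in the mergekit README, with the YAML from the Configuration section below saved to a local `merge_config.yaml`. The file name and output path are illustrative.

```python
# Hypothetical reproduction sketch; assumes mergekit's Python API
# (MergeConfiguration, run_merge, MergeOptions) per the mergekit README.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML from the Configuration section below
# (saved here as merge_config.yaml -- an assumed file name).
with open("merge_config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",           # illustrative output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # lower peak memory while loading
        low_cpu_memory=True,
    ),
)
```

The equivalent `mergekit-yaml merge_config.yaml ./merged-model` CLI invocation produces the same result.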
### Models Merged
The following models were included in the merge:
- [AGI-0/Artificium-llama3.1-8B-001](https://huggingface.co/AGI-0/Artificium-llama3.1-8B-001)
- [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
- [Dampfinchen/Llama-3.1-8B-Ultra-Instruct](https://huggingface.co/Dampfinchen/Llama-3.1-8B-Ultra-Instruct)
- [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)
- [bunnycore/LLama-3.1-Hyper-Stock](https://huggingface.co/bunnycore/LLama-3.1-Hyper-Stock)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Dampfinchen/Llama-3.1-8B-Ultra-Instruct
    parameters:
      density: 0.5
      weight: 0.5
  - model: bunnycore/LLama-3.1-Hyper-Stock
    parameters:
      density: 0.5
      weight: 0.5
  - model: AGI-0/Artificium-llama3.1-8B-001
    parameters:
      density: 0.5
      weight: 0.5
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.5
      weight: 0.5
  - model: NousResearch/Hermes-3-Llama-3.1-8B
    parameters:
      density: 0.5
      weight: 0.5
  - model: vicgalle/Configurable-Llama-3.1-8B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: vicgalle/Configurable-Llama-3.1-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
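Since the card does not name a published repo for the merged weights, the usage sketch below loads them from a placeholder local path (`./merged-model`, matching the reproduction sketch above); substitute the actual model ID or directory. It is a minimal example of chatting with the model via transformers and the Llama 3.1 chat template.

```python
# Minimal usage sketch; "./merged-model" is a placeholder path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches the merge's float16 dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```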