# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base.
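Task arithmetic forms the merged weights by adding weighted task vectors (each source model's parameter delta from the base) to the base model. As a sketch, with illustrative notation $\theta$ for parameters and $w_i$ for the per-model weights in the configuration below:

$$
\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \,(\theta_i - \theta_{\text{base}})
$$

Here the Meta-Llama-3-8B-Instruct source is the base itself, so its task vector is zero; in effect the merge computes $\theta_{\text{base}} - 0.15\,(\theta_{\text{MopeyMule}} - \theta_{\text{base}})$, lightly steering the base away from the MopeyMule direction. With `normalize: false`, the weights are applied as given rather than rescaled to sum to one.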
### Models Merged

The following models were included in the merge:
* [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule)
### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: meta-llama/Meta-Llama-3-8B-Instruct
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
  - layer_range: [0, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model: failspy/Llama-3-8B-Instruct-MopeyMule
    parameters:
      weight: -0.15
```
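To reproduce the merge, the configuration above can be passed to mergekit. Below is a minimal sketch using mergekit's Python API, assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved locally; the `config.yaml` path and `./Perky-Pat-V2` output directory are illustrative:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (file name is illustrative).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge; model weights are fetched from the Hugging Face Hub
# as needed and the merged model is written to the output directory.
run_merge(
    merge_config,
    out_path="./Perky-Pat-V2",  # illustrative output path
    options=MergeOptions(cuda=torch.cuda.is_available()),
)
```

The `mergekit-yaml` command-line entry point (`mergekit-yaml config.yaml ./Perky-Pat-V2`) performs the same operation.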