---
base_model:
- Solshine/reflection-llama-3.1-8B
- Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
library_name: transformers
tags:
- mergekit
- merge
---
# merge

**State of the art for its size on the Open LLM Leaderboard** (SOTA as of 9/30/2024, viewable at `open-llm-leaderboard/Solshine__Llama-3-1-big-thoughtful-passthrough-merge-2-details`).
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Score from Open LLM Leaderboard

In its size range, as of 9/30/2024, this model is SOTA:

- Normalized accuracy (`acc_norm`): 0.3157 (31.57%)
- Instance-level loose accuracy (`inst_level_loose_acc`): 0.3345 (33.45%)
## Merge Details
### Merge Method
This model was merged using the passthrough merge method, which stacks layer ranges taken from the source models into a single, deeper model (a so-called "frankenmerge") rather than averaging their weights.
### Models Merged
The following models were included in the merge:
- [Solshine/reflection-llama-3.1-8B](https://huggingface.co/Solshine/reflection-llama-3.1-8B)
- [Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder](https://huggingface.co/Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder)
- [mlabonne/Hermes-3-Llama-3.1-8B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-8B-lorablated)
### Configuration
The following YAML configuration was used to produce this model. Each of the five overlapping slices contributes 16 transformer layers, so the passthrough stack yields 5 × 16 = 80 layers, versus the 32 layers in a single Llama-3.1-8B:
```yaml
slices:
- sources:
  - layer_range: [0, 16]
    model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- sources:
  - layer_range: [4, 20]
    model: Solshine/reflection-llama-3.1-8B
- sources:
  - layer_range: [8, 24]
    model: Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- sources:
  - layer_range: [12, 28]
    model: Solshine/reflection-llama-3.1-8B
- sources:
  - layer_range: [16, 32]
    model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
merge_method: passthrough
dtype: float16
```
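To reproduce the merge, the configuration above can be saved as `config.yaml` and run through mergekit's documented CLI entry point, `mergekit-yaml config.yaml ./merged-model`. A minimal sketch of the equivalent Python call is below; note the exact API surface (`run_merge`, `MergeOptions`) may differ across mergekit versions, and the file and output paths are illustrative:

```python
# Reproduction sketch via mergekit's Python API (version-dependent; the
# mergekit-yaml CLI is the safer, documented route).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",  # illustrative output directory
    options=MergeOptions(
        copy_tokenizer=True,    # carry the tokenizer into the output
        lazy_unpickle=True,     # reduce peak memory while loading shards
    ),
)
```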
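Once merged, the model loads like any other `transformers` causal LM. A minimal usage sketch follows; the repo id is an assumption inferred from the leaderboard entry above, and the prompt is only an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id, inferred from the leaderboard details path.
model_id = "Solshine/Llama-3-1-big-thoughtful-passthrough-merge-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # config dtype is float16
    device_map="auto",    # requires accelerate
)

# Llama 3.1 tokenizers ship a chat template, so format the prompt with it.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```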