Llama3.1-8B-Titanium-Forge / mergekit_config.yml
Uploaded by ZeroXClem using huggingface_hub (commit 32c478d, verified)
base_model: ValiantLabs/Llama3.1-8B-ShiningValiant2
dtype: float32
merge_method: model_stock
slices:
  - sources:
      - layer_range: [0, 32]
        model: ValiantLabs/Llama3.1-8B-Cobalt
      - layer_range: [0, 32]
        model: ValiantLabs/Llama3.1-8B-Fireplace2
      - layer_range: [0, 32]
        model: ValiantLabs/Llama3.1-8B-Enigma
      - layer_range: [0, 32]
        model: ValiantLabs/Llama3.1-8B-ShiningValiant2
out_dtype: bfloat16
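
The `merge_method: model_stock` entry selects mergekit's Model Stock algorithm, which interpolates between the base model and the average of the fine-tuned source models using a ratio derived from the angle between their task vectors. The sketch below is a pure-Python illustration of that core interpolation (it is not mergekit's actual implementation, and the cosine value and toy weights are made up for demonstration):

```python
def model_stock_ratio(k: int, cos_theta: float) -> float:
    """Interpolation ratio t from the Model Stock paper:
    t = k*cos(theta) / (1 + (k-1)*cos(theta)),
    where k is the number of fine-tuned models and theta is the
    (average) angle between their task vectors."""
    return (k * cos_theta) / (1 + (k - 1) * cos_theta)


def merge_layer(base, finetuned, cos_theta):
    """Merge one weight tensor (flattened to a flat list here):
    w_merged = t * mean(finetuned) + (1 - t) * w_base."""
    k = len(finetuned)
    t = model_stock_ratio(k, cos_theta)
    avg = [sum(ws) / k for ws in zip(*finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]


# Toy example: 4 source models (as in the config above), 2 weights per "layer".
base = [0.0, 1.0]
finetuned = [[0.1, 1.1], [0.2, 0.9], [0.0, 1.2], [0.1, 1.0]]
merged = merge_layer(base, finetuned, cos_theta=0.5)
```

With four sources and `cos_theta=0.5`, the ratio works out to t = 0.8, so the merged weights sit much closer to the fine-tuned average than to the base; mergekit computes this per weight tensor across the `[0, 32]` layer range.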