---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- vicgalle/Configurable-Llama-3.1-8B-Instruct
- bunnycore/HyperLlama-3.1-8B
- ValiantLabs/Llama3.1-8B-ShiningValiant2
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/HyperLlama3.1-v2-GGUF
This is a quantized version of [bunnycore/HyperLlama3.1-v2](https://huggingface.co/bunnycore/HyperLlama3.1-v2), created using llama.cpp.
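
The GGUF files can be loaded locally with llama.cpp or its bindings. Below is a minimal sketch using the llama-cpp-python bindings (not mentioned in the original card, but a common way to load these files); the quant filename pattern (`*Q4_K_M.gguf`) and context length are assumptions, so pick the file that matches the quantization level you actually download from this repo.

```python
# Minimal sketch: load a GGUF quant from this repo with llama-cpp-python.
# The filename glob (*Q4_K_M.gguf) and n_ctx are assumptions; adjust them to
# the quant file you want from the repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/HyperLlama3.1-v2-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```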
# Original Model Card
# HyperLlama3.1-v2
HyperLlama3.1-v2 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [vicgalle/Configurable-Llama-3.1-8B-Instruct](https://huggingface.co/vicgalle/Configurable-Llama-3.1-8B-Instruct)
* [bunnycore/HyperLlama-3.1-8B](https://huggingface.co/bunnycore/HyperLlama-3.1-8B)
* [ValiantLabs/Llama3.1-8B-ShiningValiant2](https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: vicgalle/Configurable-Llama-3.1-8B-Instruct
        parameters:
          weight: 1
        layer_range: [0, 32]
      - model: bunnycore/HyperLlama-3.1-8B
        parameters:
          weight: 0.9
        layer_range: [0, 32]
      - model: ValiantLabs/Llama3.1-8B-ShiningValiant2
        parameters:
          weight: 0.6
        layer_range: [0, 32]
merge_method: task_arithmetic
base_model: bunnycore/HyperLlama-3.1-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
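
To reproduce the merge from this configuration, mergekit can be driven from Python. The snippet below is a sketch following the pattern used by lazymergekit-style notebooks (the exact options shown are assumptions); it expects the YAML above saved as `config.yaml` and mergekit installed. The equivalent CLI invocation is `mergekit-yaml config.yaml ./HyperLlama3.1-v2`.

```python
# Sketch: run the merge described by config.yaml with mergekit's Python API.
# Output directory name and MergeOptions values are illustrative choices.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./HyperLlama3.1-v2",  # where the merged model is written
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```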