---
base_model:
- Nekochu/Luminia-8B-RP
- ResplendentAI/Smarts_Llama3
- refuelai/Llama-3-Refueled
- Blackroot/Llama-3-8B-Abomination-LORA
- akjindal53244/Llama-3.1-Storm-8B
- kloodia/lora-8b-physic
- Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
- Replete-AI/L3-Pneuma-8B
- ResplendentAI/NoWarning_Llama3
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- kloodia/lora-8b-bio
library_name: transformers
tags:
- mergekit
- merge

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base](https://huggingface.co/Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base) as a base.
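To illustrate the idea behind Model Stock: each fine-tuned model is treated as an offset ("task vector") from the base, and the merge interpolates between the base and the average of the fine-tuned weights, with an interpolation ratio derived from the average angle between those offsets. The sketch below is a simplified, per-vector illustration of the formula from the paper (t = k·cosθ / (1 + (k−1)·cosθ)), not mergekit's actual implementation, which operates tensor-by-tensor over full model checkpoints:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def model_stock(base, finetuned):
    """Simplified Model Stock merge over plain weight vectors.

    base      -- base model weights as a flat list of floats
    finetuned -- list of k fine-tuned weight vectors (same length as base)
    """
    k = len(finetuned)
    # Task vectors: offset of each fine-tuned model from the base.
    deltas = [[w - b for w, b in zip(ft, base)] for ft in finetuned]
    # Average pairwise cosine similarity between task vectors.
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            ni = math.sqrt(dot(deltas[i], deltas[i]))
            nj = math.sqrt(dot(deltas[j], deltas[j]))
            cosines.append(dot(deltas[i], deltas[j]) / (ni * nj))
    cos_theta = sum(cosines) / len(cosines)
    # Interpolation ratio from the Model Stock paper.
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Merged weights: move from the base toward the fine-tuned average.
    avg = [sum(col) / k for col in zip(*finetuned)]
    return [t * a + (1 - t) * b for a, b in zip(avg, base)]
```

Intuition: when the fine-tuned models agree (cosθ near 1), t approaches 1 and the merge is close to their average; when their task vectors are nearly orthogonal (cosθ near 0), t shrinks and the merge stays close to the base.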

### Models Merged

The following models were included in the merge:
* [Nekochu/Luminia-8B-RP](https://huggingface.co/Nekochu/Luminia-8B-RP) + [ResplendentAI/Smarts_Llama3](https://huggingface.co/ResplendentAI/Smarts_Llama3)
* [refuelai/Llama-3-Refueled](https://huggingface.co/refuelai/Llama-3-Refueled) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA)
* [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic)
* [Replete-AI/L3-Pneuma-8B](https://huggingface.co/Replete-AI/L3-Pneuma-8B) + [ResplendentAI/NoWarning_Llama3](https://huggingface.co/ResplendentAI/NoWarning_Llama3)
* [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) + [kloodia/lora-8b-bio](https://huggingface.co/kloodia/lora-8b-bio)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-bio
  - model: akjindal53244/Llama-3.1-Storm-8B+kloodia/lora-8b-physic
  - model: refuelai/Llama-3-Refueled+Blackroot/Llama-3-8B-Abomination-LORA
  - model: Replete-AI/L3-Pneuma-8B+ResplendentAI/NoWarning_Llama3
  - model: Nekochu/Luminia-8B-RP+ResplendentAI/Smarts_Llama3
merge_method: model_stock
base_model: Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base
normalize: false
int8_mask: true
dtype: bfloat16

```
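To reproduce a merge like this, the configuration above can be passed to mergekit's `mergekit-yaml` command. A minimal sketch, assuming mergekit is installed and the YAML is saved locally (the file and output directory names here are placeholders):

```shell
# Install mergekit (version and environment assumed)
pip install mergekit

# Save the YAML above as merge-config.yaml, then run the merge.
# --cuda is optional and uses a GPU if available.
mergekit-yaml merge-config.yaml ./merged-model --cuda
```

Note that the `model_a+model_b` syntax in the config applies a LoRA adapter (`model_b`) on top of a base checkpoint (`model_a`) before merging.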