---
license: apache-2.0
datasets:
- jeiku/Writing
- FourOhFour/RP_Phase
- anthracite-core/full-opus-chosen-hermes-rejected-kto-v1
language:
- en
base_model:
- IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
---
## Aura-MoE-2x4B-v2

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/zyGqa-iH77dgU9D8WvoXY.png)

## Introduction

**Aura-MoE-2x4B-v2** is a state-of-the-art dedicated roleplaying model designed to fulfill your every desire.

The finetunes used in this merge were trained on several hundred million tokens of instruction data. The merge was then healed on 150 million tokens of roleplaying data, and Kahneman-Tversky Optimization (KTO) was applied to the healed model to give it a unique output style.

By the numbers, this should be a direct improvement over **[Aura-MoE-2x4B](https://huggingface.co/AuraIndustries/Aura-MoE-2x4B)**.

Developed by **Aura Industries**, with contributions from **Anthracite Org**.

## Model Details

- **Model Name**: Aura-MoE-2x4B-v2
- **Base Model**: [IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml)
- **Model Type**: Chat Completions
- **Prompt Format**: ChatML (see the usage sketch below)
- **License**: Apache-2.0
- **Language**: English
- **Max Context**: 8,192+ tokens
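
The details above translate directly into a loading-and-prompting snippet. Below is a minimal sketch with 🤗 Transformers; the repository id `AuraIndustries/Aura-MoE-2x4B-v2` is assumed here, and the sampling settings are only a starting point for roleplay.

```python
# Minimal usage sketch (assumed repo id; adjust to the actual upload path).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AuraIndustries/Aura-MoE-2x4B-v2"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The tokenizer's chat template renders messages in ChatML
# (<|im_start|>role ... <|im_end|>), as listed under Model Details.
messages = [
    {"role": "system", "content": "You are a creative roleplaying partner."},
    {"role": "user", "content": "Describe the tavern we just walked into."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```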

## License

This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).

## Quantizations 

[Static GGUF](https://huggingface.co/mradermacher/Aura-MoE-2x4B-v2-GGUF)

[Imatrix GGUF](https://huggingface.co/mradermacher/Aura-MoE-2x4B-v2-i1-GGUF)
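
For local inference, the GGUF files work with llama.cpp-based tooling. The following is a minimal sketch using `llama-cpp-python`; the quant filename pattern is an assumption, so check the linked repositories for the files actually available and pick one that fits your hardware.

```python
# Minimal sketch using llama-cpp-python; the filename pattern below assumes a
# Q4_K_M quant exists in the static GGUF repository linked above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Aura-MoE-2x4B-v2-GGUF",
    filename="*Q4_K_M.gguf",  # glob; choose the quant that fits your hardware
    n_ctx=8192,               # matches the model's advertised context length
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplaying partner."},
        {"role": "user", "content": "Set the opening scene for our adventure."},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```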

## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Coming soon...

|      Metric       |Value|
|-------------------|----:|
|Avg.               |  N/A|
|IFEval (0-Shot)    |  N/A|
|BBH (3-Shot)       |  N/A|
|MATH Lvl 5 (4-Shot)|  N/A|
|GPQA (0-shot)      |  N/A|
|MuSR (0-shot)      |  N/A|
|MMLU-PRO (5-shot)  |  N/A|

## Training Configuration

<details><summary>Click here for Mergekit and Axolotl configs</summary>

MoE Merge

```yaml
base_model: FourOhFour/Zenith_4B
gate_mode: random
dtype: bfloat16
experts_per_token: 1
experts:
  - source_model: FourOhFour/Luxe_4B
  - source_model: FourOhFour/Zenith_4B
```

SFT

```yaml
base_model: jeiku/MoEv2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: FourOhFour/RP_Phase
    type: chat_template
    chat_template: chatml
    roles_to_train: ["gpt"]
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: turn
  - path: jeiku/Writing
    type: completion
    field: text

chat_template: chatml

shuffle_merged_datasets: true
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./output/out

hub_model_id: jeiku/Aura-MoEv2
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len:

wandb_project: Aura-MoEv2
wandb_entity:
wandb_watch:
wandb_name: Aura-MoEv2
wandb_log_model:

gradient_accumulation_steps: 16
micro_batch_size: 2
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00005

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed: 
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|finetune_right_pad_id|>
```

KTO

```yaml
base_model: jeiku/Aura-MoEv2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

hub_model_id: jeiku/moekto
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true

chat_template: chatml

rl: kto
rl_beta: 0.2
kto_desirable_weight: 0.2

datasets:
  - path: anthracite-core/full-opus-chosen-hermes-rejected-kto-v1
    type: chatml.argilla

shuffle_merged_datasets: true
val_set_size: 0.0
output_dir: ./outputs/out

sequence_len: 8192
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false

wandb_project: moekto
wandb_entity:
wandb_watch:
wandb_name: moekto
wandb_log_model:

gradient_accumulation_steps: 16
micro_batch_size: 2
num_epochs: 2
max_steps: 500

optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
remove_unused_columns: false
early_stopping_patience:
resume_from_checkpoint: 
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens: 
saves_per_epoch: 1

debug:
deepspeed: 
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>
```
</details><br>