---
license: apache-2.0
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---

## This repo contains EXL2 quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-v4-72b).
## The base repo only contains the measurement file; see the revisions below for your quant of choice.

- [measurement.json](https://huggingface.co/anthracite-org/magnum-v4-72b-exl2/tree/main)
- [3.0bpw](https://huggingface.co/anthracite-org/magnum-v4-72b-exl2/tree/3.0bpw)
- [4.0bpw](https://huggingface.co/anthracite-org/magnum-v4-72b-exl2/tree/4.0bpw)
- [5.0bpw](https://huggingface.co/anthracite-org/magnum-v4-72b-exl2/tree/5.0bpw)
- [6.0bpw](https://huggingface.co/anthracite-org/magnum-v4-72b-exl2/tree/6.0bpw)
- [8.0bpw](https://huggingface.co/anthracite-org/magnum-v4-72b-exl2/tree/8.0bpw)
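
Each bpw variant lives on its own branch, so you can pull just the one you need. Below is a minimal sketch using `huggingface_hub` (the `revision` value mirrors the branch names above; the local directory is an arbitrary example):

```py
# Minimal sketch: download a single EXL2 quant branch of this repo.
# Assumes huggingface_hub is installed; local_dir is an example path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="anthracite-org/magnum-v4-72b-exl2",
    revision="4.0bpw",                        # any branch listed above
    local_dir="./magnum-v4-72b-exl2-4.0bpw",  # example destination
)
```
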
![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/ZmOOkB2QwItLmoqmnxNWO.png)

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

This model was considered experimental because it was trained on top of the Instruct model, but it turned out amazingly well; hence the code name magnum-alter, the original model that kickstarted the v4 family.

This model is fine-tuned on top of [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct).

## Prompting
A typical input would look like this:

```py
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```

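If you run the model through `transformers`, the tokenizer's bundled chat template should produce this same ChatML layout. Here is a minimal sketch (assuming the chat template shipped with the original weights; adjust the repo id if you load a local quant):

```py
# Minimal sketch: build the ChatML prompt via the tokenizer's chat template.
# Assumes the tokenizer from the original weights ships a ChatML template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anthracite-org/magnum-v4-72b")

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```
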
## SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

<details><summary>context template</summary>

```yaml
{
  "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
  "example_separator": "",
  "chat_start": "",
  "use_stop_strings": false,
  "allow_jailbreak": false,
  "always_force_name2": true,
  "trim_sentences": false,
  "include_newline": false,
  "single_line": false,
  "name": "Magnum ChatML"
}
```

</details><br>
<details><summary>instruct template</summary>

```yaml
{
  "system_prompt": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as "!" and "~" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
  "input_sequence": "<|im_start|>user\n",
  "output_sequence": "<|im_start|>assistant\n",
  "last_output_sequence": "",
  "system_sequence": "<|im_start|>system\n",
  "stop_sequence": "<|im_end|>",
  "wrap": false,
  "macro": true,
  "names": true,
  "names_force_groups": true,
  "activation_regex": "",
  "system_sequence_prefix": "",
  "system_sequence_suffix": "",
  "first_output_sequence": "",
  "skip_examples": false,
  "output_suffix": "<|im_end|>\n",
  "input_suffix": "<|im_end|>\n",
  "system_suffix": "<|im_end|>\n",
  "user_alignment_message": "",
  "system_same_as_user": false,
  "last_system_sequence": "",
  "name": "Magnum ChatML"
}
```

</details><br>
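
If you would rather keep these on disk, SillyTavern's Context and Instruct template menus can typically import them as JSON files. A minimal sketch (file names are arbitrary, and the placeholder dicts stand in for the full objects above):

```py
# Minimal sketch: write the templates above to JSON files for import into
# SillyTavern's Context / Instruct template menus.
# The dicts here are placeholders; paste the full objects from above.
import json

context_template = {"name": "Magnum ChatML"}   # paste the context template here
instruct_template = {"name": "Magnum ChatML"}  # paste the instruct template here

with open("magnum_chatml_context.json", "w") as f:
    json.dump(context_template, f, indent=2)

with open("magnum_chatml_instruct.json", "w") as f:
    json.dump(instruct_template, f, indent=2)
```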

## Axolotl config

<details><summary>See axolotl config</summary>

```yaml
base_model: /workspace/data/models/Qwen2.5-72B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anthracite-org/c2_logs_32k_llama3_qwen2_v1.2
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
    conversation: chatml
#chat_template: chatml
shuffle_merged_datasets: true
#default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: /workspace/data/magnum-72b-data
val_set_size: 0.0
output_dir: /workspace/data/72b-fft-out

sequence_len: 32768
sample_packing: true
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: 72b-magnum-fft
wandb_entity:
wandb_watch:
wandb_name: alter-attempt-01
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000004

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>

## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has been hosting our Magnum models since the first 72B release, giving thousands of people access to our models and helping us grow.

We would also like to thank all members of Anthracite who made this finetune possible.

## Datasets
- [anthracite-org/c2_logs_32k_llama3_qwen2_v1.2](https://huggingface.co/datasets/anthracite-org/c2_logs_32k_llama3_qwen2_v1.2)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)

## Training
The training was done for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
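
For reference, the config and hardware above imply the following effective batch size; a small sketch of the arithmetic (with sample packing, each packed sequence holds up to `sequence_len` tokens):

```py
# Effective batch size implied by the Axolotl config and the 8x H100 setup.
micro_batch_size = 1          # per-GPU batch size
gradient_accumulation = 2     # gradient_accumulation_steps
num_gpus = 8                  # 8x H100

effective_batch = micro_batch_size * gradient_accumulation * num_gpus
print(effective_batch)            # 16 packed sequences per optimizer step
print(effective_batch * 32768)    # up to 524288 tokens per step (sequence_len = 32768)
```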

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety
...