Commit 1c2fd1b (verified) by Smokeweaver · Parent: 039bf31

Add model card

Files changed (1): README.md (+225 −1)
---
language:
- en
license: cc-by-nc-4.0
tags:
- merge
- lazymergekit
- dpo
- rlhf
datasets:
- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
base_model:
- mlabonne/Monarch-7B
model-index:
- name: NeuralMonarch-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 73.21
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 89.09
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.41
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 77.79
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 67.78
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMonarch-7B
      name: Open LLM Leaderboard
library_name: transformers
model_creator: mlabonne
model_name: NeuralMonarch-7B
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
quantized_by: Suparious
---
# mlabonne/NeuralMonarch-7B AWQ

- Model creator: [mlabonne](https://huggingface.co/mlabonne)
- Original model: [NeuralMonarch-7B](https://huggingface.co/mlabonne/NeuralMonarch-7B)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/LxRUvkSATmy-UDKN54Q3H.jpeg)

## Model Summary

NeuralMonarch-7B is a DPO fine-tune of [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B/) using the [jondurbin/truthy-dpo-v0.1](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) and [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference datasets.

It is based on a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0)
* [mlabonne/NeuBeagle-7B](https://huggingface.co/mlabonne/NeuBeagle-7B)
* [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B)

Special thanks to [Jon Durbin](https://huggingface.co/jondurbin), [Intel](https://huggingface.co/Intel), and [Argilla](https://huggingface.co/argilla) for the preference datasets.

**Try the demo**: https://huggingface.co/spaces/mlabonne/NeuralMonarch-7B-GGUF-Chat

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/NeuralMonarch-7B-AWQ"
system_message = "You are Monarch, incarnated as a powerful AI."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Convert the prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
    "You walk one mile south, one mile west and one mile north. " \
    "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors="pt").input_ids.cuda()

# Generate output (streamed to stdout by the TextStreamer)
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.
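
The 4-bit idea can be sketched numerically. Below is a toy illustration of plain symmetric 4-bit quantization, not AWQ's actual algorithm (which additionally rescales salient weight channels using activation statistics before quantizing); all names here are illustrative only:

```python
import numpy as np

# Toy symmetric 4-bit quantization: map floats onto the int4 grid [-8, 7]
# with a single per-tensor scale. AWQ's real method is activation-aware
# and per-channel; this only shows the round-to-grid core idea.
def quantize_4bit(w: np.ndarray):
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.02, -0.51, 0.33, 0.70], dtype=np.float32)
q, s = quantize_4bit(w)
print(q.tolist())  # → [0, -5, 3, 7]
print(np.max(np.abs(w - dequantize(q, s))))  # small reconstruction error
```

Each weight now needs only 4 bits plus a shared scale, which is where the memory and bandwidth savings come from.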

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, for all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
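
For clients that only accept raw strings, the template above can be filled in directly. A minimal sketch (the `build_chatml` helper is hypothetical, not part of this card):

```python
# Hypothetical helper that fills the ChatML template above
# for a single-turn conversation.
def build_chatml(system_message: str, prompt: str) -> str:
    return (
        "<|im_start|>system\n"
        f"{system_message}<|im_end|>\n"
        "<|im_start|>user\n"
        f"{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

text = build_chatml("You are Monarch, incarnated as a powerful AI.", "Hello!")
print(text)
```

In practice, `tokenizer.apply_chat_template` produces the same string when the tokenizer ships a ChatML chat template.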