aashish1904 committed on
Commit
fafdb2e
•
1 Parent(s): d5a99c0

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +303 -0
README.md ADDED
@@ -0,0 +1,303 @@
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
- mergekit
- lazymergekit
- creative
- roleplay
- instruct
- qwen
- model_stock
- bfloat16
base_model:
- newsbang/Homer-v0.5-Qwen2.5-7B
- allknowingroger/HomerSlerp1-7B
- bunnycore/Qwen2.5-7B-Instruct-Fusion
- bunnycore/Qandora-2.5-7B-Creative
model-index:
- name: Qwen2.5-7B-HomerCreative-Mix
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 78.35
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 36.77
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 32.33
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.6
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.77
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 38.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
      name: Open LLM Leaderboard
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Qwen2.5-7B-HomerCreative-Mix-GGUF

This is a quantized version of [ZeroXClem/Qwen2.5-7B-HomerCreative-Mix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix), created with llama.cpp.

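A minimal sketch of running one of the GGUF files locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below. The quant filename is an assumption; substitute whichever file you downloaded from this repository.

```python
# Hedged sketch: run a GGUF quant with llama-cpp-python (pip install llama-cpp-python).
# The filename below is an assumption -- use the quant file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-7B-HomerCreative-Mix.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU when built with GPU support
)

out = llm("Once upon a time in a land far, far away,", max_tokens=150, temperature=0.7)
print(out["choices"][0]["text"])
```
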
# Original Model Card

# ZeroXClem/Qwen2.5-7B-HomerCreative-Mix

**ZeroXClem/Qwen2.5-7B-HomerCreative-Mix** is a language model created by merging four pre-trained models with the [mergekit](https://github.com/cg123/mergekit) framework. The merge uses the **Model Stock** method to combine the creative strengths of **Qandora**, the instruction-following ability of **Qwen2.5-7B-Instruct-Fusion**, the smooth weight blending of **HomerSlerp1**, and the conversational foundation of **Homer-v0.5-Qwen2.5-7B**. The result targets creative text generation, contextual understanding, and dynamic conversational interaction.

## 🚀 Merged Models

This model merge incorporates the following:

- [**bunnycore/Qandora-2.5-7B-Creative**](https://huggingface.co/bunnycore/Qandora-2.5-7B-Creative): Specializes in creative text generation, enhancing the model's ability to produce imaginative and diverse content.

- [**bunnycore/Qwen2.5-7B-Instruct-Fusion**](https://huggingface.co/bunnycore/Qwen2.5-7B-Instruct-Fusion): Focuses on instruction following, improving the model's ability to understand and execute user commands.

- [**allknowingroger/HomerSlerp1-7B**](https://huggingface.co/allknowingroger/HomerSlerp1-7B): Uses spherical linear interpolation (SLERP) to blend model weights smoothly, ensuring a harmonious integration of different model attributes.

- [**newsbang/Homer-v0.5-Qwen2.5-7B**](https://huggingface.co/newsbang/Homer-v0.5-Qwen2.5-7B): Acts as the foundational conversational model, providing robust language comprehension and generation.

## 🧩 Merge Configuration

The configuration below outlines how the models are merged using the **Model Stock** method, balancing the strengths of each source model.

```yaml
# Merge configuration for ZeroXClem/Qwen2.5-7B-HomerCreative-Mix using Model Stock

models:
  - model: bunnycore/Qandora-2.5-7B-Creative
  - model: bunnycore/Qwen2.5-7B-Instruct-Fusion
  - model: allknowingroger/HomerSlerp1-7B
merge_method: model_stock
base_model: newsbang/Homer-v0.5-Qwen2.5-7B
normalize: false
int8_mask: true
dtype: bfloat16
```

### Key Parameters

- **Merge Method (`merge_method`):** Uses **Model Stock**, as described in [Model Stock](https://arxiv.org/abs/2403.19522), to combine multiple fine-tuned models with a shared base model (see the reproduction sketch after this list).

- **Models (`models`):** The models to be merged:
  - **bunnycore/Qandora-2.5-7B-Creative:** Enhances creative text generation.
  - **bunnycore/Qwen2.5-7B-Instruct-Fusion:** Improves instruction following.
  - **allknowingroger/HomerSlerp1-7B:** Contributes SLERP-blended weights.

- **Base Model (`base_model`):** The foundation for the merge, here **newsbang/Homer-v0.5-Qwen2.5-7B**.

- **Normalization (`normalize`):** Set to `false` to retain the original scaling of the model weights during the merge.

- **INT8 Mask (`int8_mask`):** Enabled (`true`) to compute merge masks in INT8, reducing memory use during the merge without a significant loss in precision.

- **Data Type (`dtype`):** Uses `bfloat16` to keep the merged weights memory-efficient while preserving adequate precision.

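For readers who want to reproduce the merge, here is a minimal sketch. It assumes `mergekit` is installed (`pip install mergekit`) and that the YAML above has been saved as `config.yaml`; the output directory name is arbitrary.

```python
# Reproduction sketch: invoke mergekit's YAML-driven CLI on the config above.
# Assumptions: `pip install mergekit` has been run and config.yaml holds the YAML shown.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",                    # mergekit's YAML merge entry point
        "config.yaml",                      # the merge configuration shown above
        "./Qwen2.5-7B-HomerCreative-Mix",   # arbitrary output directory
        "--cuda",                           # optional: run the merge on GPU
    ],
    check=True,  # raise if the merge fails
)
```
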
## 🏆 Performance Highlights

- **Creative Text Generation:** Produces imaginative and diverse content suited to creative writing, storytelling, and content creation.

- **Instruction Following:** Understands and executes user instructions reliably, making the model responsive and accurate in task execution.

- **Efficient Computation:** The `bfloat16` weights keep memory use and inference cost moderate, while INT8 masking keeps the merge itself memory-efficient.

## 🎯 Use Cases & Applications

**ZeroXClem/Qwen2.5-7B-HomerCreative-Mix** is designed for environments that demand both creative generation and precise instruction following. Suitable applications include:

- **Creative Writing Assistance:** Helping authors and content creators generate narratives, dialogue, and descriptive text.

- **Interactive Storytelling and Role-Playing:** Driving dynamic, engaging interactions in role-playing games and interactive storytelling platforms.

- **Educational Tools and Tutoring Systems:** Providing explanations, answering questions, and assisting in educational content creation.

- **Technical Support and Customer Service:** Offering accurate, contextually relevant responses in support scenarios.

- **Content Generation for Marketing:** Creating varied marketing copy, social media posts, and promotional material.

## 📝 Usage

To use **ZeroXClem/Qwen2.5-7B-HomerCreative-Mix**, follow the steps below.

### Installation

First, install the necessary libraries:

```bash
pip install -qU transformers accelerate
```
214
+
215
+ ### Example Code
216
+
217
+ Below is an example of how to load and use the model for text generation:
218
+
219
+ ```python
220
+ from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
221
+ import torch
222
+
223
+ # Define the model name
224
+ model_name = "ZeroXClem/Qwen2.5-7B-HomerCreative-Mix"
225
+
226
+ # Load the tokenizer
227
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
228
+
229
+ # Load the model
230
+ model = AutoModelForCausalLM.from_pretrained(
231
+ model_name,
232
+ torch_dtype=torch.bfloat16,
233
+ device_map="auto"
234
+ )
235
+
236
+ # Initialize the pipeline
237
+ text_generator = pipeline(
238
+ "text-generation",
239
+ model=model,
240
+ tokenizer=tokenizer,
241
+ torch_dtype=torch.bfloat16,
242
+ device_map="auto"
243
+ )
244
+
245
+ # Define the input prompt
246
+ prompt = "Once upon a time in a land far, far away,"
247
+
248
+ # Generate the output
249
+ outputs = text_generator(
250
+ prompt,
251
+ max_new_tokens=150,
252
+ do_sample=True,
253
+ temperature=0.7,
254
+ top_k=50,
255
+ top_p=0.95
256
+ )
257
+
258
+ # Print the generated text
259
+ print(outputs[0]["generated_text"])
260
+ ```
261
+
262
+ ### Notes
263
+
264
+ - **Fine-Tuning:** This merged model may require fine-tuning to optimize performance for specific applications or domains.
265
+
266
+ - **Resource Requirements:** Ensure that your environment has sufficient computational resources, especially GPU-enabled hardware, to handle the model efficiently during inference.
267
+
268
+ - **Customization:** Users can adjust parameters such as `temperature`, `top_k`, and `top_p` to control the creativity and diversity of the generated text.
269
+
270
+
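Since Qwen2.5-based models are chat-tuned, prompting through the chat template often works better than raw completion. The sketch below reuses `model` and `tokenizer` from the example above and assumes the merge preserves Qwen2.5's chat template:

```python
# Minimal chat-template sketch; reuses `model` and `tokenizer` from the
# example above and assumes the merge keeps Qwen2.5's chat template.
messages = [
    {"role": "system", "content": "You are a creative storytelling assistant."},
    {"role": "user", "content": "Write the opening paragraph of a cozy mystery."},
]

# Render the conversation with the model's chat template
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate, then decode only the newly produced tokens
output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
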
## 📜 License

This model is open-sourced under the **Apache-2.0 License**.

## 💡 Tags

- `merge`
- `mergekit`
- `model_stock`
- `Qwen`
- `Homer`
- `Creative`
- `ZeroXClem/Qwen2.5-7B-HomerCreative-Mix`
- `bunnycore/Qandora-2.5-7B-Creative`
- `bunnycore/Qwen2.5-7B-Instruct-Fusion`
- `allknowingroger/HomerSlerp1-7B`
- `newsbang/Homer-v0.5-Qwen2.5-7B`

288
+
289
+ ---
290
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
291
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ZeroXClem__Qwen2.5-7B-HomerCreative-Mix)
292
+
293
+ | Metric |Value|
294
+ |-------------------|----:|
295
+ |Avg. |34.35|
296
+ |IFEval (0-Shot) |78.35|
297
+ |BBH (3-Shot) |36.77|
298
+ |MATH Lvl 5 (4-Shot)|32.33|
299
+ |GPQA (0-shot) | 6.60|
300
+ |MuSR (0-shot) |13.77|
301
+ |MMLU-PRO (5-shot) |38.30|
302
+
303
+