This model was converted to GGUF format from [`prithivMLmods/QwQ-LCoT2-7B-Instruct`](https://huggingface.co/prithivMLmods/QwQ-LCoT2-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/prithivMLmods/QwQ-LCoT2-7B-Instruct) for more details on the model.
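
If you want to reproduce the conversion locally rather than through the GGUF-my-repo space, llama.cpp's conversion script can produce an equivalent file. This is a minimal sketch, assuming a local llama.cpp checkout and a downloaded copy of the original model; the file names are illustrative, not the ones this repo ships:

```bash
# Convert the Hugging Face checkpoint to GGUF, then quantize it.
# Paths and output names here are illustrative.
python convert_hf_to_gguf.py ./QwQ-LCoT2-7B-Instruct \
  --outfile qwq-lcot2-7b-instruct-f16.gguf
./llama-quantize qwq-lcot2-7b-instruct-f16.gguf \
  qwq-lcot2-7b-instruct-q4_k_m.gguf Q4_K_M
```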
## Model details

QwQ-LCoT2-7B-Instruct is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It leverages the Qwen2.5-7B base model and has been fine-tuned on chain-of-thought (CoT) reasoning datasets. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it ideal for applications such as instruction following, text generation, and complex reasoning.

## Quickstart with Transformers

The following snippet shows how to load the tokenizer and model with `apply_chat_template` and generate a response:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-LCoT2-7B-Instruct"

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]

# Render the chat template to a plain string, then tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then drop the prompt tokens so only the reply remains
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
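Note that `add_generation_prompt=True` appends the assistant-turn tokens so the model starts writing a reply instead of continuing the prompt, and the slicing step strips the echoed input IDs so only newly generated tokens are decoded.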
## Intended Use

The QwQ-LCoT2-7B-Instruct model is designed for advanced reasoning and instruction-following tasks, with specific applications including:

- Instruction Following: providing detailed and step-by-step guidance for a wide range of user queries.
- Logical Reasoning: solving problems requiring multi-step thought processes, such as math problems or complex logic-based scenarios.
- Text Generation: crafting coherent, contextually relevant, and well-structured text in response to prompts.
- Problem-Solving: analyzing and addressing tasks that require chain-of-thought (CoT) reasoning, making it ideal for education, tutoring, and technical support.
- Knowledge Enhancement: leveraging reasoning datasets to offer deeper insights and explanations for a wide variety of topics.
## Limitations

- Data Bias: as the model is fine-tuned on specific datasets, its outputs may reflect biases inherent in the training data.
- Context Limitation: performance may degrade for tasks requiring knowledge or reasoning that significantly exceeds the model's pretraining or fine-tuning context.
- Complexity Ceiling: while optimized for multi-step reasoning, exceedingly complex or abstract problems may result in incomplete or incorrect outputs.
- Dependency on Prompt Quality: the quality and specificity of the user prompt heavily influence the model's responses.
- Non-Factual Outputs: despite being fine-tuned for reasoning, the model can still generate hallucinated or factually inaccurate content, particularly for niche or unverified topics.
- Computational Requirements: running the model effectively requires significant computational resources, particularly when generating long sequences or handling high-concurrency workloads.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
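```bash
brew install llama.cpp
```

Once installed, the GGUF file can be run directly from the Hub with llama.cpp's CLI. The `--hf-repo` and `--hf-file` values below are placeholders; substitute this repository's actual name and quant file:

```bash
# Placeholder repo and file names -- replace with this repo's real values.
llama-cli --hf-repo <this-repo>/QwQ-LCoT2-7B-Instruct-GGUF \
  --hf-file <quant-file>.gguf \
  -p "How many r in strawberry."
```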