---
license: apache-2.0
language:
- en
base_model: prithivMLmods/QwQ-LCoT2-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- LCoT
- Qwen
- v2
- llama-cpp
- gguf-my-repo
datasets:
- PowerInfer/QWQ-LONGCOT-500K
- AI-MO/NuminaMath-CoT
- prithivMLmods/Math-Solve
- amphora/QwQ-LongCoT-130K
- prithivMLmods/Deepthink-Reasoning
model-index:
- name: QwQ-LCoT2-7B-Instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 55.76
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 34.37
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 22.21
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.38
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 15.75
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FQwQ-LCoT2-7B-Instruct
      name: Open LLM Leaderboard
---

# Triangle104/QwQ-LCoT2-7B-Instruct-Q5_K_S-GGUF
This model was converted to GGUF format from [`prithivMLmods/QwQ-LCoT2-7B-Instruct`](https://huggingface.co/prithivMLmods/QwQ-LCoT2-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/QwQ-LCoT2-7B-Instruct) for more details on the model.
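
If you want the quantized file on disk without going through llama.cpp's built-in downloader (for example, to use it with another GGUF-compatible runtime), you can fetch it with the Hugging Face CLI; this sketch assumes `huggingface_hub` is installed:

```bash
# Download only the Q5_K_S GGUF file into the current directory
huggingface-cli download Triangle104/QwQ-LCoT2-7B-Instruct-Q5_K_S-GGUF \
  qwq-lcot2-7b-instruct-q5_k_s.gguf --local-dir .
```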

---
## Model details
QwQ-LCoT2-7B-Instruct is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It is built on the Qwen2.5-7B base model and fine-tuned on chain-of-thought (CoT) reasoning datasets. The model is optimized for tasks requiring logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to instruction following, text generation, and complex reasoning.

### Quickstart with Transformers
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and how to generate a response.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-LCoT2-7B-Instruct"

# Load the model across available devices with an appropriate dtype
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry."
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
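
For long chain-of-thought outputs it can be more pleasant to stream tokens as they are produced. A minimal sketch using the `TextStreamer` utility from `transformers`, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip echoing the prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**model_inputs, max_new_tokens=512, streamer=streamer)
```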

### Intended Use

The QwQ-LCoT2-7B-Instruct model is designed for advanced reasoning and instruction-following tasks, with specific applications including:

- **Instruction Following:** Providing detailed and step-by-step guidance for a wide range of user queries.
- **Logical Reasoning:** Solving problems requiring multi-step thought processes, such as math problems or complex logic-based scenarios.
- **Text Generation:** Crafting coherent, contextually relevant, and well-structured text in response to prompts.
- **Problem-Solving:** Analyzing and addressing tasks that require chain-of-thought (CoT) reasoning, making it well suited to education, tutoring, and technical support.
- **Knowledge Enhancement:** Leveraging reasoning datasets to offer deeper insights and explanations for a wide variety of topics.

### Limitations
- **Data Bias:** As the model is fine-tuned on specific datasets, its outputs may reflect biases inherent in the training data.
- **Context Limitation:** Performance may degrade on tasks requiring knowledge or reasoning that significantly exceeds the model's pretraining or fine-tuning context.
- **Complexity Ceiling:** While optimized for multi-step reasoning, exceedingly complex or abstract problems may result in incomplete or incorrect outputs.
- **Dependency on Prompt Quality:** The quality and specificity of the user prompt heavily influence the model's responses.
- **Non-Factual Outputs:** Despite being fine-tuned for reasoning, the model can still generate hallucinated or factually inaccurate content, particularly for niche or unverified topics.
- **Computational Requirements:** Running the model effectively requires significant computational resources, particularly when generating long sequences or handling high-concurrency workloads.

---
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/QwQ-LCoT2-7B-Instruct-Q5_K_S-GGUF --hf-file qwq-lcot2-7b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/QwQ-LCoT2-7B-Instruct-Q5_K_S-GGUF --hf-file qwq-lcot2-7b-instruct-q5_k_s.gguf -c 2048
```
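
Recent llama-server builds expose an OpenAI-compatible chat endpoint; assuming the server is running with its default bind address of `127.0.0.1:8080`, you can query it with a plain `curl` call like this sketch:

```bash
# Send a chat completion request to the local llama-server instance
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant. Think step-by-step."},
      {"role": "user", "content": "How many r are in strawberry?"}
    ],
    "max_tokens": 512
  }'
```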

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/QwQ-LCoT2-7B-Instruct-Q5_K_S-GGUF --hf-file qwq-lcot2-7b-instruct-q5_k_s.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/QwQ-LCoT2-7B-Instruct-Q5_K_S-GGUF --hf-file qwq-lcot2-7b-instruct-q5_k_s.gguf -c 2048
```
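
If you would rather drive the GGUF file from Python, the `llama-cpp-python` bindings can pull this checkpoint straight from the Hub. A minimal sketch, assuming a recent `llama-cpp-python` (with `huggingface_hub`) is installed:

```python
from llama_cpp import Llama

# Download the Q5_K_S file from this repo (cached locally) and load it
llm = Llama.from_pretrained(
    repo_id="Triangle104/QwQ-LCoT2-7B-Instruct-Q5_K_S-GGUF",
    filename="qwq-lcot2-7b-instruct-q5_k_s.gguf",
    n_ctx=2048,  # context window, matching the llama-server example above
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Think step-by-step."},
        {"role": "user", "content": "How many r are in strawberry?"},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```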