---
license: apache-2.0
pipeline_tag: text-generation
tags:
- cortex.cpp
- featured
---
|
|
|
## Overview
|
|
|
**QwQ** is the reasoning model of the **Qwen** series. Unlike conventional instruction-tuned models, **QwQ** is designed to think and reason before responding, which yields significantly stronger performance on downstream tasks, especially challenging problems.

**QwQ-32B** is the **medium-sized** reasoning model in the QwQ family, delivering **competitive performance** against state-of-the-art reasoning models such as **DeepSeek-R1** and **o1-mini**. It is optimized for tasks requiring logical deduction, multi-step reasoning, and advanced comprehension.

The model is well suited for **AI research, automated theorem proving, advanced dialogue systems, and high-level decision-making applications**.
|
|
|
## Variants
|
|
|
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [QwQ-32B](https://huggingface.co/cortexso/qwen-qwq/tree/main) | `cortex run qwen-qwq:32b` |
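
To download the listed variant ahead of time and then chat with it, a minimal sketch with the Cortex CLI looks like this (the pull step is optional, since `cortex run` also fetches the model on first use; exact behavior may vary slightly between Cortex versions):

```bash
# Download the 32B variant from the cortexso Hugging Face repo (optional)
cortex pull qwen-qwq:32b

# Start an interactive chat session with that variant
cortex run qwen-qwq:32b
```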
|
|
|
## Use it with Jan (UI)
|
|
|
1. Install **Jan** by following the [Quickstart](https://jan.ai/docs/quickstart)
2. In the Jan Model Hub, search for the model:

```bash
cortexso/qwen-qwq
```
|
|
|
## Use it with Cortex (CLI)
|
|
|
1. Install **Cortex** by following the [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the following command:

```bash
cortex run qwen-qwq
```
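
Once the model is running, Cortex also exposes an OpenAI-compatible HTTP API locally, so you can query the model from scripts. Below is a minimal `curl` sketch; the port (`39281`), the `/v1/chat/completions` path, and the `qwen-qwq:32b` model ID are assumptions based on Cortex defaults, so adjust them if your setup differs:

```bash
# Send a chat completion request to the local Cortex server (OpenAI-compatible API).
# 39281 is assumed to be the default Cortex port; change it if yours differs.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-qwq:32b",
    "messages": [
      {"role": "user", "content": "How many prime numbers are there between 1 and 25?"}
    ]
  }'
```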
|
|
|
## Credits
|
|
|
- **Author:** Qwen Team
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [Apache 2.0](https://choosealicense.com/licenses/apache-2.0/)
- **Blog post:** [Introducing QwQ-32B: The Medium-Sized Reasoning Model](https://qwenlm.github.io/blog/qwq-32b/)