---
license: mit
pipeline_tag: text-generation
tags:
- cortex.cpp
---
## Overview
**DeepSeek** developed and released the [DeepSeek R1 Distill Qwen 7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) model, a distilled variant of DeepSeek R1 built on the Qwen 7B base model. It is fine-tuned for high-performance text generation and optimized for dialogue and information-seeking tasks.
The model is designed for applications in customer support, conversational AI, and research, focusing on delivering accurate, helpful, and safe outputs while maintaining efficiency.
## Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [deepseek-r1-distill-qwen-7b:7b](https://huggingface.co/cortexso/deepseek-r1-distill-qwen-7b/tree/7b) | `cortex run deepseek-r1-distill-qwen-7b:7b` |
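If you prefer to download a variant before starting a chat session, you can pull it first and run it afterwards. The commands below are a minimal sketch using the `7b` tag from the table above; adjust the tag if you use a different variant.

```bash
# Download the 7b variant from the cortexso repository
cortex pull deepseek-r1-distill-qwen-7b:7b

# Start an interactive session with the downloaded variant
cortex run deepseek-r1-distill-qwen-7b:7b
```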
## Use it with Jan (UI)
1. Install **Jan** using [Quickstart](https://jan.ai/docs/quickstart)
2. In the Jan model Hub, search for:
```bash
cortexso/deepseek-r1-distill-qwen-7b
```
## Use it with Cortex (CLI)
1. Install **Cortex** using [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the following command:
```bash
cortex run deepseek-r1-distill-qwen-7b
```
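Once the model is running, Cortex also exposes an OpenAI-compatible HTTP API that you can call from your own applications. The sketch below assumes the default local server address (`127.0.0.1:39281`) and the `7b` variant tag; both may differ depending on your installation, so check your Cortex configuration before use.

```bash
# Hypothetical request against the local OpenAI-compatible endpoint;
# host, port, and model tag are assumptions -- adjust to match your setup.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-qwen-7b:7b",
    "messages": [
      {"role": "user", "content": "Explain the difference between BFS and DFS."}
    ]
  }'
```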
## Credits
- **Author:** DeepSeek
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B#7-license)
- **Papers:** [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/html/2501.12948v1)