---
license: mit
pipeline_tag: text-generation
tags:
- cortex.cpp
---
## Overview
**DeepSeek** developed and released the [DeepSeek R1 Distill Qwen 1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) model, a 1.5B-parameter model created by distilling the reasoning capabilities of DeepSeek-R1 into a smaller Qwen base model. It is fine-tuned for high-performance text generation and optimized for dialogue and information-seeking tasks. The model balances efficiency and accuracy while maintaining a much smaller footprint than the full DeepSeek-R1 model.
The model is designed for applications in customer support, conversational AI, and research, prioritizing both helpfulness and safety.
## Variants
| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [Deepseek-r1-distill-qwen-1.5b 1.5b](https://huggingface.co/cortexso/deepseek-r1-distill-qwen-1.5b/tree/1.5b) | `cortex run deepseek-r1-distill-qwen-1.5b:1.5b` |
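If you prefer to download the variant before running it, the sketch below uses Cortex's `pull` subcommand with the tag from the table above; treat the exact subcommand as an assumption and check `cortex --help` against your installed Cortex version.

```bash
# Download the 1.5b variant ahead of time (assumes `cortex pull` is available
# in your Cortex.cpp install; adjust if your CLI version differs).
cortex pull deepseek-r1-distill-qwen-1.5b:1.5b

# Then start a chat session with that specific variant.
cortex run deepseek-r1-distill-qwen-1.5b:1.5b
```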
## Use it with Jan (UI)
1. Install **Jan** using [Quickstart](https://jan.ai/docs/quickstart)
2. In the Jan Model Hub, search for and download the model using this ID:
```bash
cortexso/deepseek-r1-distill-qwen-1.5b
```
## Use it with Cortex (CLI)
1. Install **Cortex** using [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the following command:
```bash
cortex run deepseek-r1-distill-qwen-1.5b
```
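Once the model is running, you can also query it programmatically. The example below is a minimal sketch assuming Cortex's OpenAI-compatible local server on its documented default port (39281); adjust the host, port, and model tag to match your setup.

```bash
# Hypothetical request against Cortex's OpenAI-compatible chat endpoint;
# the port 39281 is Cortex's documented default but may differ locally.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-qwen-1.5b:1.5b",
    "messages": [
      {"role": "user", "content": "Solve 12 * 7 and explain your reasoning step by step."}
    ]
  }'
```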
## Credits
- **Author:** DeepSeek
- **Converter:** [Homebrew](https://www.homebrew.ltd/)
- **Original License:** [License](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B#7-license)
- **Papers:** [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/html/2501.12948v1)