Overview

DeepSeek developed and released DeepSeek R1 Distill Llama 70B, a 70B-parameter model created by distilling the reasoning capabilities of DeepSeek R1 into Llama 3.3 70B Instruct: the Llama base model is fine-tuned on reasoning data generated by DeepSeek R1. It is the largest model in the DeepSeek R1 Distill series and is designed for text generation, dialogue, and advanced reasoning at scale.

The model is well suited to enterprise applications, research, conversational AI, and large-scale knowledge systems that require strong reasoning accuracy together with efficient deployment.

Variants

| No | Variant | Cortex CLI command |
|----|---------|--------------------|
| 1  | gguf    | cortex run deepseek-r1-distill-llama-70b |
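
For a quick start with the GGUF variant, the commands below sketch a typical flow with the Cortex CLI (pulling ahead of time is optional; cortex run also downloads the model on first use if it is not already present locally):

    # download the GGUF build from the cortexso Hugging Face repository
    cortex pull deepseek-r1-distill-llama-70b
    # start an interactive chat session with the model
    cortex run deepseek-r1-distill-llama-70b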

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide.
  2. In the Jan Model Hub, search for and download:
    cortexso/deepseek-r1-distill-llama-70b
    

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide.
  2. Run the model with the command:
    cortex run deepseek-r1-distill-llama-70b
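
Once the model is running, Cortex also serves an OpenAI-compatible HTTP API locally, so the model can be queried from scripts. The sketch below assumes the default local endpoint http://localhost:39281; adjust the host and port to match your Cortex configuration:

    # query the locally running model through Cortex's OpenAI-compatible API
    # (the default port 39281 is assumed here; check your Cortex settings)
    curl http://localhost:39281/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": "Explain the Pythagorean theorem step by step."}]
      }'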
    

Credits

The original DeepSeek R1 Distill Llama 70B model was developed and released by DeepSeek; the GGUF build in this repository is hosted under the cortexso organization on Hugging Face.
