Overview

InternLM developed and released InternLM3-8B-Instruct, an 8-billion-parameter instruction-tuned language model designed for general-purpose use and advanced reasoning tasks. The model delivers state-of-the-art performance on reasoning and knowledge-intensive tasks, outperforming comparable models such as Llama3.1-8B and Qwen2.5-7B. Trained on 4 trillion high-quality tokens, InternLM3 achieves exceptional training efficiency, reducing training cost by over 75% compared to other models of similar scale.

The model features dual operational modes: a deep thinking mode for solving complex reasoning tasks through long chain-of-thought processes and a normal response mode for fluent and interactive user experiences. These capabilities make InternLM3-8B-Instruct ideal for applications in conversational AI, advanced reasoning, and general-purpose language understanding.
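
For reference, here is a minimal sketch of chatting with the original (non-GGUF) weights via Hugging Face transformers; the repo id internlm/internlm3-8b-instruct and the trust_remote_code flag are assumptions based on how earlier InternLM releases were published, not details taken from this card, so adjust them to the actual upstream repository.

    # Hedged sketch: load the upstream InternLM3-8B-Instruct weights with transformers.
    # The repo id and trust_remote_code=True are assumptions, not taken from this card.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "internlm/internlm3-8b-instruct"  # assumed upstream repository
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, trust_remote_code=True, device_map="auto"
    )

    # Build a chat prompt and generate a normal-mode response.
    messages = [{"role": "user", "content": "Explain chain-of-thought reasoning in one paragraph."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))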

Variants

No  Variant  Cortex CLI command
1   gguf     cortex run internlm3-8b-it

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide
  2. Search for the model in the Jan model hub:
    cortexso/internlm3-8b-it
    

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide
  2. Run the model with the following command (see the sketch after these steps for programmatic access):
    cortex run internlm3-8b-it
    
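Once the model is running, Cortex exposes an OpenAI-compatible HTTP API locally, so any standard OpenAI client can talk to it. The sketch below assumes a local endpoint of http://127.0.0.1:39281/v1 and the model name internlm3-8b-it; both are assumptions, so check your local Cortex configuration for the actual host, port, and model id.

    # Hedged sketch: chat with the locally served model through Cortex's
    # OpenAI-compatible API. Base URL, port, and model name are assumptions.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://127.0.0.1:39281/v1",  # assumed local Cortex endpoint
        api_key="not-needed",                  # local server; the key is ignored
    )

    response = client.chat.completions.create(
        model="internlm3-8b-it",
        messages=[{"role": "user", "content": "Give me three prime numbers above 100."}],
    )
    print(response.choices[0].message.content)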

Credits
