Overview
Prime Intellect released INTELLECT-2, a 32-billion-parameter large language model (LLM) trained through distributed reinforcement learning on globally donated GPU resources. Built on the Qwen2 architecture and fine-tuned with the prime-rl framework, INTELLECT-2 demonstrates strong performance on math, coding, and logical reasoning tasks.
This model leverages GRPO (Group Relative Policy Optimization) over verifiable rewards, introducing asynchronous distributed RL training with enhanced stability techniques. While its primary focus was verifiable mathematical and coding tasks, it remains compatible with general-purpose text generation.
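To make the training recipe concrete, the sketch below illustrates the core idea of GRPO over verifiable rewards: several completions are sampled per prompt, each is scored by a programmatic verifier, and advantages are computed relative to the group's own mean and standard deviation instead of a learned value model. This is a minimal, illustrative sketch; the function names and the toy verifier are assumptions and do not reflect the actual prime-rl API.

```python
import numpy as np

def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> np.ndarray:
    """Group-relative advantages: each sample is scored against the
    mean/std of its own group rather than a learned critic."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

def verify_math_answer(completion: str, reference: str) -> float:
    """Toy verifiable reward: 1.0 if the completion ends with the
    reference answer, else 0.0. Real verifiers parse answers robustly."""
    return 1.0 if completion.strip().endswith(reference.strip()) else 0.0

# Example: a group of 4 sampled completions for one math prompt
completions = [
    "... the answer is 42",
    "... the answer is 41",
    "... the answer is 42",
    "... no idea",
]
rewards = [verify_math_answer(c, "42") for c in completions]
advantages = grpo_advantages(rewards)  # positive for correct samples, negative otherwise
print(rewards, advantages)
```

The resulting advantages would then weight a clipped policy-gradient update on the sampled tokens; because rewards come from a verifier rather than a reward model, correctness of the final answer is what drives learning.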
Variants
INTELLECT-2
| No | Variant | Branch | Cortex CLI command |
|----|---------|--------|---------------------|
| 1 | INTELLECT-2 (32B) | 32b | cortex run intellect-2:32b |
Each branch includes multiple GGUF quantized versions, optimized for various hardware configurations:
- INTELLECT-2-32B: q2_k, q3_k_l, q3_k_m, q3_k_s, q4_k_m, q4_k_s, q5_k_m, q5_k_s, q6_k, q8_0
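Beyond Jan and Cortex, any GGUF-compatible runtime can load these quantized files. The following is a minimal sketch using llama-cpp-python (an assumption, not part of the official instructions); the model path is hypothetical and should point to whichever quant you downloaded from cortexso/intellect-2.

```python
from llama_cpp import Llama

# Hypothetical local path to a downloaded 4-bit quant; adjust to your file.
llm = Llama(
    model_path="./intellect-2-32b-q4_k_m.gguf",
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Prove that the sum of two even integers is even."}],
    max_tokens=512,
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (q2_k, q3_k_*) trade answer quality for lower memory use, while q6_k and q8_0 stay closest to the full-precision weights but require considerably more VRAM.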
Use it with Jan (UI)
- Install Jan using the Quickstart guide
- Download the model from the Jan Model Hub:
cortexso/intellect-2
Use it with Cortex (CLI)
- Install Cortex using the Quickstart guide
- Run the model with the following command:
cortex run intellect-2
Credits
- Author: Prime Intellect
- Converter: Menlo Research
- Original License: Apache-2.0
- Paper: INTELLECT-2 Technical Report