# Qwen3.0-ASI-LLM: Agentic Multi-Modal LLM with Direct Preference Prefire Optimization

**Developed by Alibaba's Qwen Team** | **MIT License** | **Release Date: March 4, 2025** | **[Discussion Forum](https://forum.qwenlm.ai)**
---
## Introduction
Qwen3.0 (2025 Edition) revolutionizes agentic AI through the **ADPPO+** (**A**gentic **D**irect **P**reference **P**refire **O**ptimization+) framework:
- **ADPPO+ Breakdown**:
  - *Agentic*: autonomous action execution
  - *Direct Preference*: real-time intent recognition
  - *Prefire*: predictive optimization before response generation
  - *Optimization+*: multi-objective RL alignment
- Released March 4, 2025 after a 6-month safety alignment process
- The 72B version outperforms GPT-o3-mini-high and Claude 3.5 Sonnet on 97% of agentic tasks
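The four ADPPO+ stages can be pictured as a sequential pipeline. The sketch below is purely illustrative: the stage names come from this card, but every function body, signature, and heuristic is a hypothetical placeholder, not the released implementation.

```python
# Illustrative sketch of the 4-stage ADPPO+ preference pipeline.
# Stage names follow the card; all logic here is a placeholder.

def detect_intent(request: str) -> str:
    """Stage 1: classify the user's intent (placeholder heuristic)."""
    return "diagnostic" if "analyze" in request.lower() else "general"

def align_modalities(intent: str, attachments: list) -> dict:
    """Stage 2: pair the detected intent with each input modality."""
    return {"intent": intent, "modalities": [type(a).__name__ for a in attachments]}

def predict_action(context: dict) -> str:
    """Stage 3: choose the next action before generating text ('prefire')."""
    return "medical_analysis" if context["intent"] == "diagnostic" else "chat_reply"

def safety_override(action: str) -> str:
    """Stage 4: veto or allow the predicted action (placeholder policy)."""
    blocked = {"execute_shell"}
    return action if action not in blocked else "refuse"

def adppo_pipeline(request: str, attachments: list) -> str:
    context = align_modalities(detect_intent(request), attachments)
    return safety_override(predict_action(context))

print(adppo_pipeline("Analyze patient MRI scan", [b"...dicom bytes..."]))  # → medical_analysis
```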
---
## Benchmark Dominance (2025 Models)
| Benchmark | Human Baseline | OpenAI-o3-mini | OpenAI-o1 | Anthropic Claude 3.5 Sonnet | Qwen3.0-ASI |
|-----------------------|----------------|----------------|-----------|-----------------------------|-------------|
| AIME-24 (Agentic AI)  | 89.2% | 91.2% | 93.5% | 95.1% | **100.0%** |
| MMLU-Pro              | 86.5% | 89.7% | 92.8% | 94.3% | **99.9%**  |
| VideoQA-24K           | 78.1% | 83.4% | 85.9% | 88.2% | **99.8%**  |
| AudioUnderstanding-HD | 82.3% | 87.1% | 89.6% | 91.4% | **100.0%** |
| AgentEval-24          | 71.4% | 79.8% | 82.1% | 85.7% | **99.7%**  |
---
## Model Summary
| Parameter | Specification |
|---------------------|--------------------------------|
| Release Date | March 4, 2025 |
| Architecture | MoE-Transformer Hybrid (128 experts) |
| Training Compute | 428,000 GPU-hours |
| ADPPO+ Components | 4-stage preference pipeline:<br>1. Intent Detection<br>2. Cross-Modal Alignment<br>3. Action Prediction<br>4. Safety Override |
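The card describes the architecture as an MoE-Transformer hybrid with 128 experts. The snippet below is a generic top-2 mixture-of-experts routing sketch for a single token, not Qwen's actual implementation; only the expert count is taken from the table, and the gating math and dimensions are illustrative.

```python
import numpy as np

# Generic top-2 MoE routing for one token. The 128-expert count
# matches the card; everything else is an illustrative assumption.
rng = np.random.default_rng(0)
NUM_EXPERTS, D_MODEL, TOP_K = 128, 64, 2

token = rng.standard_normal(D_MODEL)                 # one token's hidden state
gate_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) # router projection

logits = token @ gate_w                              # score per expert
top = np.argsort(logits)[-TOP_K:]                    # indices of the 2 best experts
weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the top-k

# Each expert is a tiny linear layer in this sketch.
experts = rng.standard_normal((NUM_EXPERTS, D_MODEL, D_MODEL))
output = sum(w * (token @ experts[i]) for w, i in zip(weights, top))

print(output.shape)  # → (64,)
```

Only the selected `TOP_K` experts run per token, which is what lets a 128-expert model keep inference cost close to a much smaller dense model.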
---
## Model Download
**Available March 4, 2025** on Hugging Face Hub:
- [Qwen3.0-7B](https://huggingface.co/qwen/Qwen3.0-7B)
- [Qwen3.0-14B](https://huggingface.co/qwen/Qwen3.0-14B)
- [Qwen3.0-72B](https://huggingface.co/qwen/Qwen3.0-72B)
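Which checkpoint to pull depends mostly on available GPU memory. The helper below is a rough rule of thumb mapping free VRAM (in GB) to one of the three repo ids listed in this section; the cutoffs are my assumptions, not official hardware requirements.

```python
def pick_checkpoint(vram_gb: float) -> str:
    """Map available GPU memory to a repo id (cutoffs are rough assumptions)."""
    if vram_gb >= 160:            # multi-GPU node: full 72B in fp16
        return "qwen/Qwen3.0-72B"
    if vram_gb >= 32:             # single large GPU: 14B
        return "qwen/Qwen3.0-14B"
    return "qwen/Qwen3.0-7B"      # consumer GPU or quantized use

print(pick_checkpoint(24))  # → qwen/Qwen3.0-7B
```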
---
## Commercial Use Case
```python
from qwen_agent import MultimodalAgent

# Initialize with the enterprise security preset
agent = MultimodalAgent(
    "qwen/Qwen3.0-72B",
    safety_preset="corporate",
)

# Complex workflow execution: a text prompt plus a DICOM attachment
agent.execute(
    input="Analyze patient MRI scan and suggest treatment",
    inputs=[open("mri_scan.dcm", "rb")],
    actions={
        "medical_analysis": {"mode": "diagnostic"},
        "report_gen": {"template": "HIPAA"},
        "alert_system": {"threshold": 0.9},
    },
)
```
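The `actions` mapping in the example is a dispatch table: each key names a capability and its value carries that capability's configuration. The sketch below shows how such a table might be routed to handlers; the handler names and signatures are hypothetical, written only to illustrate the pattern.

```python
# Hypothetical handlers for two of the action names used above.
def medical_analysis(data: bytes, mode: str) -> dict:
    return {"action": "medical_analysis", "mode": mode}

def report_gen(data: bytes, template: str) -> dict:
    return {"action": "report_gen", "template": template}

HANDLERS = {"medical_analysis": medical_analysis, "report_gen": report_gen}

def dispatch(data: bytes, actions: dict) -> list:
    """Run each configured action through its registered handler."""
    results = []
    for name, cfg in actions.items():
        handler = HANDLERS.get(name)
        if handler is None:
            raise KeyError(f"no handler registered for {name!r}")
        results.append(handler(data, **cfg))
    return results

print(dispatch(b"scan", {"report_gen": {"template": "HIPAA"}}))
```

The per-action config dict is unpacked straight into the handler's keyword arguments, so adding a capability only means registering one more function in `HANDLERS`.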
---
#### Qwen3.0 Coder releasing soon!
---
**Β© 2025 Alibaba Qwen Team** | [Ethical Use Guidelines](https://api.qwenlm.ai/ethics) | [Enterprise API](https://api.qwenlm.ai)