Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey
Abstract
By extending the advantage of chain-of-thought (CoT) reasoning in human-like step-by-step processes to multimodal contexts, multimodal CoT (MCoT) reasoning has recently garnered significant research attention, especially in its integration with multimodal large language models (MLLMs). Existing MCoT studies design various methodologies and innovative reasoning paradigms to address the unique challenges posed by image, video, speech, audio, 3D, and structured data, achieving extensive success in applications such as robotics, healthcare, autonomous driving, and multimodal generation. However, MCoT still presents distinct challenges and opportunities that call for further attention to sustain progress in the field, yet an up-to-date review of the domain is lacking. To bridge this gap, we present the first systematic survey of MCoT reasoning, elucidating the relevant foundational concepts and definitions. We offer a comprehensive taxonomy and an in-depth analysis of current methodologies from diverse perspectives across various application scenarios. Furthermore, we provide insights into existing challenges and future research directions, aiming to foster innovation toward multimodal AGI.
Community
🚀 Multimodal CoT Reasoning: A Comprehensive Survey
Key Contributions:
1️⃣ First-ever MCoT survey: Fills the gap with a systematic, in-depth review.
2️⃣ Detailed taxonomy: Classifies MCoT research directions comprehensively.
3️⃣ Six methodologies: Covers Rationale Construction, Multimodal Thought, Test-Time Scaling, and more.
4️⃣ 12 future directions: General reasoning, dynamic chain optimization, hallucination, safety, and beyond.
5️⃣ Open-source resources: Datasets and benchmarks to empower the community!
Tech Highlights:
✅ Multimodal reasoning: Image, video, audio, 3D, charts.
✅ MCoT paradigms: From Chain to Tree to Graph.
✅ Applications: Autonomous driving, embodied AI, healthcare, multimodal generation, and more.
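The Chain → Tree → Graph progression above refers to the topology of the reasoning trace: a chain is a linear sequence of thoughts, while a tree branches into alternative hypotheses. A minimal, purely illustrative sketch of these structures (the `ThoughtNode` class and modality tags are hypothetical, not from the survey) might look like:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ThoughtNode:
    # One intermediate reasoning step; `modality` tags the evidence the step
    # draws on (e.g. "text", "image", "audio"). All names are illustrative.
    content: str
    modality: str = "text"
    children: List["ThoughtNode"] = field(default_factory=list)

def build_chain(steps: List[Tuple[str, str]]) -> ThoughtNode:
    """Chain paradigm: a strictly linear sequence of thoughts."""
    root = ThoughtNode(*steps[0])
    node = root
    for content, modality in steps[1:]:
        child = ThoughtNode(content, modality)
        node.children.append(child)
        node = child
    return root

def max_depth(node: ThoughtNode) -> int:
    """Length of the longest reasoning path starting at this node."""
    if not node.children:
        return 1
    return 1 + max(max_depth(c) for c in node.children)

# Chain: look at the image, extract a fact, conclude.
chain = build_chain([
    ("Identify the objects in the image", "image"),
    ("Count the red blocks", "image"),
    ("Answer: there are 3 red blocks", "text"),
])

# Tree paradigm: branch into alternative hypotheses, then a search or
# scoring procedure would select the most promising path.
tree = ThoughtNode("Interpret the chart", "image")
tree.children = [
    ThoughtNode("Hypothesis A: sales rose in Q2"),
    ThoughtNode("Hypothesis B: sales fell in Q2"),
]

print(max_depth(chain))  # 3
print(max_depth(tree))   # 2
```

A graph-structured paradigm generalizes this further by letting thoughts share or merge successors, which a `children` list of shared node references can already represent.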
Paper: https://arxiv.org/abs/2503.12605
GitHub: https://github.com/yaotingwangofficial/Awesome-MCoT