Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey
Abstract
By extending the strengths of chain-of-thought (CoT) reasoning — human-like, step-by-step problem solving — to multimodal contexts, multimodal CoT (MCoT) reasoning has recently garnered significant research attention, especially through its integration with multimodal large language models (MLLMs). Existing MCoT studies design various methodologies and innovative reasoning paradigms to address the unique challenges posed by image, video, speech, audio, 3D, and structured data, achieving broad success in applications such as robotics, healthcare, autonomous driving, and multimodal generation. However, MCoT still presents distinct challenges and opportunities that demand further attention to sustain progress in this field, and an up-to-date review of the domain is lacking. To bridge this gap, we present the first systematic survey of MCoT reasoning, elucidating the relevant foundational concepts and definitions. We offer a comprehensive taxonomy and an in-depth analysis of current methodologies from diverse perspectives across various application scenarios. Furthermore, we provide insights into existing challenges and future research directions, aiming to foster innovation toward multimodal AGI.
Community
🚀 Multimodal CoT Reasoning: A Comprehensive Survey
Key Contributions:
1️⃣ First-ever MCoT survey: Fills the gap with a systematic, in-depth review.
2️⃣ Detailed taxonomy: Classifies MCoT research directions comprehensively.
3️⃣ Six methodologies: Covers Rationale Construction, Multimodal Thought, Test-Time Scaling, and more.
4️⃣ 12 future directions: General reasoning, dynamic chain optimization, hallucination, safety, and beyond.
5️⃣ Open-source resources: Datasets and benchmarks to empower the community!
Tech Highlights:
✅ Multimodal reasoning: Image, video, audio, 3D, charts.
✅ MCoT paradigms: From Chain to Tree to Graph.
✅ Applications: Autonomous driving, embodied AI, healthcare, multimodal generation, and more.
Paper: https://arxiv.org/abs/2503.12605
GitHub: https://github.com/yaotingwangofficial/Awesome-MCoT
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Integrating Chain-of-Thought for Multimodal Alignment: A Study on 3D Vision-Language Learning (2025)
- Ask in Any Modality: A Comprehensive Survey on Multimodal Retrieval-Augmented Generation (2025)
- Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching (2025)
- Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models? (2025)
- Boosting Multimodal Reasoning with MCTS-Automated Structured Thinking (2025)
- R1-Onevision: Advancing Generalized Multimodal Reasoning through Cross-Modal Formalization (2025)
- Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models (2025)