arxiv:2503.12605

Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey

Published on Mar 16 · Submitted by Gh0stAR on Mar 18

Abstract

By extending the advantages of chain-of-thought (CoT) reasoning, with its human-like step-by-step processes, to multimodal contexts, multimodal CoT (MCoT) reasoning has recently garnered significant research attention, especially in integration with multimodal large language models (MLLMs). Existing MCoT studies design various methodologies and innovative reasoning paradigms to address the unique challenges of image, video, speech, audio, 3D, and structured data, achieving broad success in applications such as robotics, healthcare, autonomous driving, and multimodal generation. However, MCoT still presents distinct challenges and opportunities that require further attention to keep the field thriving; unfortunately, an up-to-date review of this domain is lacking. To bridge this gap, we present the first systematic survey of MCoT reasoning, elucidating the relevant foundational concepts and definitions. We offer a comprehensive taxonomy and an in-depth analysis of current methodologies from diverse perspectives across various application scenarios. Furthermore, we provide insights into existing challenges and future research directions, aiming to foster innovation toward multimodal AGI.

Community


🚀 Multimodal CoT Reasoning: A Comprehensive Survey

Key Contributions:
1️⃣ First-ever MCoT survey: Fills the gap with a systematic, in-depth review.
2️⃣ Detailed taxonomy: Classifies MCoT research directions comprehensively.
3️⃣ Six methodologies: Covers Rationale Construction, Multimodal Thought, Test-Time Scaling, and more.
4️⃣ 12 future directions: General reasoning, dynamic chain optimization, hallucination, safety, and beyond.
5️⃣ Open-source resources: Datasets and benchmarks to empower the community!

Tech Highlights:
✅ Multimodal reasoning: Image, video, audio, 3D, charts.
✅ MCoT paradigms: From Chain to Tree to Graph.
✅ Applications: Autonomous driving, embodied AI, healthcare, multimodal generation, and more.

Paper: https://arxiv.org/abs/2503.12605
GitHub: https://github.com/yaotingwangofficial/Awesome-MCoT
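
For readers new to the topic, here is a minimal, illustrative sketch of what a multimodal CoT prompt can look like. The message schema, field names, and helper function below are assumptions made for illustration only, not the survey's method or any particular MLLM API.

```python
# Minimal sketch of a multimodal chain-of-thought (MCoT) prompt, assuming a
# generic chat-style MLLM interface that accepts interleaved image and text
# content. The message format here is a hypothetical placeholder, not the
# survey's or any specific library's API.

def build_mcot_prompt(image_path: str, question: str) -> list[dict]:
    """Compose a multimodal CoT prompt: the image is attached as one content
    part, and the text part asks the model to reason step by step over the
    visual evidence before committing to a final answer."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": image_path},
                {
                    "type": "text",
                    "text": (
                        f"{question}\n"
                        "Let's think step by step: first describe the relevant "
                        "visual evidence, then reason over it, and finally give "
                        "the result on a new line prefixed with 'Answer:'."
                    ),
                },
            ],
        }
    ]

if __name__ == "__main__":
    # Example usage with a hypothetical chart image.
    prompt = build_mcot_prompt("chart.png", "Which region had the highest growth?")
    print(prompt)
```

The paradigms covered in the survey generalize this single linear chain to tree- and graph-structured reasoning over the same kind of interleaved multimodal context.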
