arxiv:2502.14044

Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data

Published on Feb 19 · Submitted by YuchengShi on Feb 21

Abstract

Large multimodal models (LMMs) have shown impressive capabilities in a wide range of visual tasks. However, they often struggle with fine-grained visual reasoning, failing to identify domain-specific objectives and provide justifiable explanations for their predictions. To address this, we propose a novel visual rejection sampling framework to improve the cognition and explainability of LMMs using self-synthesized data. Specifically, visual fine-tuning requires images, queries, and target answers. Our approach begins by synthesizing interpretable answers that include human-verifiable visual features. These features are derived from expert-defined concepts, carefully selected for their alignment with the image content. After each round of fine-tuning, we apply a reward model-free filtering mechanism to select the highest-quality interpretable answers for the next round of tuning. This iterative process of data synthesis and fine-tuning progressively improves the model's ability to generate accurate and reasonable explanations. Experimental results demonstrate the effectiveness of our method in improving both the accuracy and explainability of specialized visual classification tasks.
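
As a rough illustration of this loop, the Python sketch below shows one way the synthesize, filter, and fine-tune steps could be wired together. The callables `synthesize`, `keep`, and `finetune`, and the overall control flow, are placeholders assumed for exposition, not the authors' implementation.

```python
# Hypothetical sketch of the iterative loop described in the abstract:
# synthesize interpretable answers, filter them without a reward model,
# and fine-tune on the survivors. Every callable here is a placeholder.
from typing import Callable, List, Tuple

def iterative_self_training(
    lmm,
    data: List[Tuple[object, str, str]],   # (image, query, target label)
    synthesize: Callable,                   # drafts an answer citing expert-defined visual concepts
    keep: Callable,                         # reward-model-free quality filter
    finetune: Callable,                     # one round of visual instruction fine-tuning
    rounds: int = 3,
):
    """Self-synthesize data, filter it, and fine-tune, repeated for several rounds."""
    for _ in range(rounds):
        # 1. The current model drafts interpretable answers that reference
        #    human-verifiable visual features aligned with each image.
        candidates = [(img, q, synthesize(lmm, img, q, y)) for img, q, y in data]
        # 2. Reward-model-free filtering keeps only the highest-quality answers
        #    (e.g., those consistent with the target label).
        selected = [c for c in candidates if keep(lmm, *c)]
        # 3. Fine-tune on the selected answers; the improved model then
        #    synthesizes better data in the next round.
        lmm = finetune(lmm, selected)
    return lmm
```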

Community

Paper author and submitter (YuchengShi):

🏔️ The Challenge:
Modern AI systems excel at general tasks but often fall short when applied to specialized fields such as medical imaging, plant disease detection, or fine-grained species classification. Training with only image-label pairs tends to compromise the model’s ability to follow instructions and explain its decisions—an issue that becomes critical when precision and accountability are required.

📣 Our Solution:
We introduce a novel framework where the model self-generates interpretable visual explanations through an iterative fine-tuning process. By leveraging self-synthesized data, our approach automatically extracts key visual features and produces expert-level, image-specific explanations. This method overcomes the limitations of conventional labeling (which often lacks detailed interpretability) while preserving the model’s general instruction-following capabilities.
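
As one way to make the "automatically extracts key visual features" step concrete, expert-defined concepts could be ranked by their alignment with the image using an off-the-shelf image-text model such as CLIP. This is an assumption for illustration only; the paper's actual selection mechanism may differ, and the helper function, model choice, and concept list below are hypothetical.

```python
# Illustrative concept-selection step: rank expert-defined concepts by how well
# they align with the image, here scored with an off-the-shelf CLIP model.
# The model name, concept list, and this helper are assumptions for exposition.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def select_aligned_concepts(image_path: str, concepts: list, top_k: int = 3) -> list:
    """Return the expert-defined concepts most aligned with the image content."""
    image = Image.open(image_path).convert("RGB")
    inputs = clip_processor(text=concepts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = clip_model(**inputs).logits_per_image.squeeze(0)  # one score per concept
    top = scores.topk(min(top_k, len(concepts)))
    return [concepts[i] for i in top.indices.tolist()]

# Hypothetical usage for a plant-disease image:
# select_aligned_concepts("leaf.jpg", ["yellow halo around lesions",
#                                      "concentric ring spots",
#                                      "powdery white coating on the leaf surface"])
```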

🔥 Why It Matters:
Understanding not only what the model predicts but also why it makes those predictions is essential for building trust in AI systems, especially in high-stakes domains. Our work bridges the gap between performance and interpretability, enabling models to serve as true domain experts with transparent decision-making—paving the way for safer and more reliable AI applications.

