Monte Carlo Diffusion for Generalizable Learning-Based RANSAC
Abstract
Random Sample Consensus (RANSAC) is a fundamental approach for robustly estimating parametric models from noisy data. Existing learning-based RANSAC methods use deep learning to enhance the robustness of RANSAC against outliers. However, these approaches are trained and tested on data generated by the same algorithms, which limits their generalization to out-of-distribution data at inference time. In this paper, we therefore introduce a novel diffusion-based paradigm that progressively injects noise into ground-truth data, simulating the noisy conditions under which learning-based RANSAC is trained. To enhance data diversity, we incorporate Monte Carlo sampling into the diffusion paradigm, approximating diverse data distributions by introducing different types of randomness at multiple stages. We evaluate our approach in the context of feature matching through comprehensive experiments on the ScanNet and MegaDepth datasets. The results demonstrate that our Monte Carlo diffusion mechanism significantly improves the generalization ability of learning-based RANSAC. We also conduct extensive ablation studies that highlight the effectiveness of the key components of our framework.
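The abstract describes the forward corruption process only at a high level, so the following is a minimal sketch of how such a Monte Carlo diffusion step could look for feature correspondences. The function name, noise ranges, and outlier model are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: corrupt ground-truth correspondences with a randomized
# multi-step (diffusion-like) noise process. The diffusion length, per-step
# noise scale, and outlier ratio are all sampled at random (Monte Carlo),
# so repeated calls yield diverse training distributions.
import numpy as np

def monte_carlo_diffusion(gt_matches, max_steps=10, rng=None):
    """Corrupt ground-truth matches of shape (N, 4) = [x1, y1, x2, y2]."""
    rng = np.random.default_rng() if rng is None else rng
    matches = gt_matches.astype(np.float64)
    n = matches.shape[0]

    # Randomly choose how far along the forward diffusion we go.
    t = int(rng.integers(1, max_steps + 1))

    for _ in range(t):
        # Random per-step Gaussian perturbation of target keypoint coordinates.
        sigma = rng.uniform(0.5, 3.0)  # pixels; illustrative range
        matches[:, 2:] += rng.normal(0.0, sigma, size=(n, 2))

        # Randomly replace a subset of correspondences with outliers.
        outlier_ratio = rng.uniform(0.0, 0.3)
        num_out = int(outlier_ratio * n)
        if num_out > 0:
            idx = rng.choice(n, size=num_out, replace=False)
            lo, hi = matches[:, 2:].min(0), matches[:, 2:].max(0)
            matches[idx, 2:] = rng.uniform(lo, hi, size=(num_out, 2))

    return matches, t
```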
Community
Current learning-based RANSAC methods struggle with out-of-distribution generalization. This paper introduces Monte Carlo Diffusion, a novel approach that generates diverse noisy data, enhancing the adaptability of any learning-based RANSAC model. This plug-and-play module significantly improves robustness in real-world applications.
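Because the corruption is defined on ground-truth data rather than on the output of a particular matcher, such a module could in principle be placed in front of any learning-based RANSAC model. A hypothetical usage sketch, reusing the `monte_carlo_diffusion` function above and a toy stand-in for the estimator, follows:

```python
# Hypothetical plug-and-play usage with synthetic ground-truth correspondences.
import numpy as np

rng = np.random.default_rng(0)
gt_matches = rng.uniform(0, 640, size=(200, 4))          # synthetic GT matches
noisy_matches, steps = monte_carlo_diffusion(gt_matches, rng=rng)

def toy_estimator(matches):
    # Placeholder for any learned inlier-scoring network / robust estimator.
    return np.ones(len(matches)) / len(matches)

inlier_scores = toy_estimator(noisy_matches)
print(noisy_matches.shape, steps, inlier_scores.sum())
```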
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Zero-shot Depth Completion via Test-time Alignment with Affine-invariant Depth Prior (2025)
- Stochastic Forward-Backward Deconvolution: Training Diffusion Models with Finite Noisy Datasets (2025)
- Denoising Score Distillation: From Noisy Diffusion Pretraining to One-Step High-Quality Generation (2025)
- Boosting Diffusion-Based Text Image Super-Resolution Model Towards Generalized Real-World Scenarios (2025)
- Reconciling Stochastic and Deterministic Strategies for Zero-shot Image Restoration using Diffusion Model in Dual (2025)
- Outlier Synthesis via Hamiltonian Monte Carlo for Out-of-Distribution Detection (2025)
- Boosting Diffusion Guidance via Learning Degradation-Aware Models for Blind Super Resolution (2025)