RewardSDS: Aligning Score Distillation via Reward-Weighted Sampling
Abstract
Score Distillation Sampling (SDS) has emerged as an effective technique for leveraging 2D diffusion priors for tasks such as text-to-3D generation. While powerful, SDS struggles to achieve fine-grained alignment to user intent. To overcome this, we introduce RewardSDS, a novel approach that weights noise samples based on alignment scores from a reward model, producing a weighted SDS loss. This loss prioritizes gradients from noise samples that yield aligned, high-reward outputs. Our approach is broadly applicable and can extend SDS-based methods. In particular, we demonstrate its applicability to Variational Score Distillation (VSD) by introducing RewardVSD. We evaluate RewardSDS and RewardVSD on text-to-image, 2D editing, and text-to-3D generation tasks, showing significant improvements over SDS and VSD on a diverse set of metrics measuring generation quality and alignment to desired reward models, enabling state-of-the-art performance. The project page is available at https://itaychachy.github.io/reward-sds/.
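The core idea in the abstract, weighting per-noise-sample gradients by reward-model scores before combining them, can be sketched minimally as follows. This is an illustrative sketch only: the function name, the softmax weighting, and the toy gradients/rewards are assumptions for demonstration, not the paper's exact formulation.

```python
import numpy as np

def reward_weighted_grad(grads, rewards, temperature=1.0):
    """Hypothetical sketch of reward-weighted score distillation:
    combine per-noise-sample gradients, weighting each sample by a
    softmax over its reward-model score. Higher-reward (better-aligned)
    samples contribute more to the final update direction."""
    rewards = np.asarray(rewards, dtype=float)
    # Numerically stable softmax over rewards; temperature controls
    # how sharply the weighting favors high-reward samples.
    w = np.exp((rewards - rewards.max()) / temperature)
    w /= w.sum()
    # Weighted sum of the stacked per-sample gradients.
    combined = np.tensordot(w, np.stack(grads), axes=1)
    return combined, w

# Toy example: 3 noise samples with 2-dim "gradients" and made-up
# reward scores (e.g. from a CLIP- or preference-based reward model).
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
rewards = [0.1, 0.9, 0.5]
combined, weights = reward_weighted_grad(grads, rewards)
```

With `temperature → 0` the combination approaches using only the single highest-reward sample's gradient; with large `temperature` it falls back to the uniform average used by plain SDS.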
Community
The following similar papers were recommended by Librarian Bot (an automated comment) via the Semantic Scholar API:
- Adding Additional Control to One-Step Diffusion with Joint Distribution Matching (2025)
- Identity-preserving Distillation Sampling by Fixed-Point Iterator (2025)
- Unified Reward Model for Multimodal Understanding and Generation (2025)
- Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening (2025)
- ROCM: RLHF on consistency models (2025)
- Learning to Sample Effective and Diverse Prompts for Text-to-Image Generation (2025)
- Aligning Text to Image in Diffusion Models is Easier Than You Think (2025)
Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0