Abstract
Diffusion models have emerged as a promising alternative to autoregressive models for modeling discrete categorical data. However, diffusion models that operate directly on discrete state spaces do not fully exploit the power of iterative refinement, since signal is lost during transitions between discrete states. Existing continuous diffusion models for discrete data underperform discrete approaches, and the unclear link between the two restricts the development of diffusion models for discrete data. In this work, we propose a continuous diffusion model for language modeling that incorporates the geometry of the underlying categorical distribution. We establish a connection between discrete diffusion and continuous flow on the statistical manifold, and building on this analogy, we introduce a simple design for the diffusion process that generalizes previous discrete diffusion models. We further propose a simulation-free training framework based on radial symmetry, along with a simple technique to address the high dimensionality of the manifold. Comprehensive experiments on language modeling benchmarks and other modalities show that our method outperforms existing discrete diffusion models and approaches the performance of autoregressive models. Code is available at https://github.com/harryjo97/RDLM.
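The "geometry of the underlying categorical distribution" refers to the statistical manifold of categorical distributions. A standard construction (a hedged illustration, not the paper's exact implementation) is the square-root map, which embeds the probability simplex into the positive orthant of the unit hypersphere, where a continuous diffusion or flow can move along geodesics:

```python
import numpy as np

def sqrt_map(p):
    """Square-root transform: maps a categorical distribution p on the
    simplex to the unit hypersphere, since sum(sqrt(p)**2) == 1."""
    return np.sqrt(p)

def slerp(u, v, t):
    """Geodesic (great-circle) interpolation between two unit vectors,
    a simple stand-in for continuous flow on the sphere."""
    theta = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return u
    return (np.sin((1 - t) * theta) * u + np.sin(t * theta) * v) / np.sin(theta)

# Two categorical distributions over 3 classes (illustrative values).
p = np.array([0.2, 0.3, 0.5])
q = np.array([0.7, 0.2, 0.1])
u, v = sqrt_map(p), sqrt_map(q)

# A point halfway along the geodesic is still on the sphere, so squaring
# it recovers a valid categorical distribution.
mid = slerp(u, v, 0.5)
```

Here `sqrt_map` and `slerp` are hypothetical helper names; the point is only that intermediate states of a flow on the sphere always map back to valid distributions, unlike naive interpolation in discrete state space.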
Community
TL;DR: We introduce the Riemannian Diffusion Language Model (RDLM), a continuous diffusion model for language and other discrete data that outperforms discrete diffusion models on language modeling tasks as well as other modalities.
Wow, impressive work!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Large Language Diffusion Models (2025)
- Theoretical Benefit and Limitation of Diffusion Language Model (2025)
- Fine-Tuning Discrete Diffusion Models with Policy Gradient Methods (2025)
- Graph Representation Learning with Diffusion Generative Models (2025)
- Non-Markovian Discrete Diffusion with Causal Language Models (2025)
- A General Framework for Inference-time Scaling and Steering of Diffusion Models (2025)
- DiTAR: Diffusion Transformer Autoregressive Modeling for Speech Generation (2025)
That's so elegant, oh my word...