Diffusion
Our work on Diffusion Language Models (DLMs) explores the potential of diffusion-based models to exceed the generative quality of traditional autoregressive language models while overcoming their current limitations in training and inference efficiency. Although early results indicate that DLMs can produce more coherent and diverse outputs than autoregressive models, these benefits have yet to be demonstrated at scale, possibly because of slower training and more expensive inference. Our DLM work builds on our recent image diffusion research, in which we studied the effect of the forward process in the Fourier domain; this let us precisely control image reconstruction across frequencies by considering the signal-to-noise ratio of image spatial statistics. By considering analogous ordered statistics in language, we hope to improve training and inference regimes for DLMs.
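To make the Fourier-domain intuition concrete, here is a minimal sketch of how a variance-preserving forward process degrades different spatial frequencies at different rates. It assumes the standard forward process x_t = sqrt(abar)·x0 + sqrt(1 - abar)·eps and a power-law image spectrum S(f) ∝ 1/f²; the function name, the spectrum model, and the specific frequency values are illustrative assumptions, not details from the publication.

```python
import numpy as np

def per_frequency_snr(abar: float, freqs: np.ndarray) -> np.ndarray:
    """Per-frequency SNR of a variance-preserving forward diffusion step.

    SNR(f, t) = abar * S(f) / (1 - abar), with an assumed power-law
    spectrum S(f) = 1 / f^2 (a common model for natural images).
    """
    signal_power = 1.0 / freqs**2       # power-law spectrum model (assumption)
    noise_power = (1.0 - abar) / abar   # added Gaussian noise is flat in frequency
    return signal_power / noise_power

freqs = np.array([1.0, 4.0, 16.0, 64.0])  # low -> high spatial frequency
for abar in (0.99, 0.5, 0.01):            # early -> late in the forward process
    print(abar, per_frequency_snr(abar, freqs))
```

Running this shows that high frequencies fall below SNR = 1 much earlier in the forward process than low frequencies, which is the ordering the Fourier-space perspective exploits to control reconstruction frequency by frequency.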
Learn more:
A Fourier Space Perspective on Diffusion Models
Publication | May 2025