Decoupled Latent Diffusion Model for Enhancing Image Generation

Abstract

Latent Diffusion Models have emerged as an efficient alternative to conventional diffusion approaches by compressing high-dimensional images into a lower-dimensional latent space using a Variational Autoencoder (VAE) and performing diffusion in that space. In the standard Latent Diffusion Model (LDM), the latent code is formed by sampling from a Gaussian distribution (i.e., combining both the mean and the standard deviation), which helps regularize the latent space but appears to contribute little beyond the deterministic component. Motivated by recent empirical observations that the decoder relies primarily on the latent mean, our work reexamines this paradigm and proposes a decoupled latent diffusion model that focuses on a simplified latent representation. Specifically, we compare three configurations: (i) the standard latent code, (ii) a concatenated representation that explicitly preserves both mean and variance, and (iii) a deterministic mean-only representation. Our extensive experiments on multiple benchmark datasets demonstrate that, compared to the standard approach, the mean-only configuration not only maintains but in many cases improves synthesis quality, producing sharper and more coherent images while reducing unnecessary noise. These findings suggest that a simplified, deterministic latent representation can yield more stable and efficient generative models, challenging the conventional reliance on latent sampling in diffusion-based image synthesis.
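The three latent configurations described in the abstract can be sketched as alternative ways of turning a VAE encoder's outputs (mean and log-variance) into the latent code fed to the diffusion model. The sketch below is illustrative only and does not reproduce the paper's architecture; the function name and `mode` labels are our own for exposition.

```python
import numpy as np

def latent_code(mu, logvar, mode="sampled", rng=None):
    """Build a latent code from VAE encoder outputs (mu, logvar).

    Modes (hypothetical labels for the three configurations):
      "sampled"   -- standard LDM latent via the reparameterization
                     trick: z = mu + sigma * eps, eps ~ N(0, I)
      "concat"    -- keep mean and (log-)variance explicitly by
                     concatenating them along the channel axis
      "mean_only" -- deterministic latent: drop the sampling step
    """
    if mode == "sampled":
        rng = rng or np.random.default_rng(0)
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * logvar) * eps
    if mode == "concat":
        return np.concatenate([mu, logvar], axis=-1)
    if mode == "mean_only":
        return mu
    raise ValueError(f"unknown mode: {mode}")
```

Note that the "sampled" and "mean_only" codes have the same shape, so switching between them leaves the diffusion backbone unchanged, whereas the "concat" code doubles the channel dimension and would require a wider input layer.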

Keywords

Denoising Diffusion Model; Image Generation; Latent Representation
Title
Decoupled Latent Diffusion Model for Enhancing Image Generation
Authors
Choi, Hyun-Tae; Nakamura, Kensuke; Hong, Byung-Woo
DOI
10.1109/ACCESS.2025.3592163
Publication Date
2025
Type
Article
Journal
IEEE Access
Volume
13
Pages
130505 ~ 130516