Inpainting the Degraded Area with Diffusion Model
Citations
Web of Science: 0
Scopus: 0

Abstract

In recent years, diffusion models have gained prominence for their ability to generate high-quality images by reversing a stochastic diffusion process. Denoising Diffusion Probabilistic Models (DDPMs) have shown state-of-the-art performance in image synthesis, but their focus on removing Gaussian noise limits their application in cases where only partial image information is available. In this paper, we propose a novel adaptation of the DDPM framework that replaces the conventional denoising process with an inpainting process. Instead of adding noise, our approach progressively removes random pixels, setting them to zero at each timestep; the reverse process reconstructs the original image by filling in the missing pixels. To demonstrate the feasibility of this method, we conducted a prototype experiment on a subset of the CelebA-HQ dataset, training the model on 128 images. Initial results indicate that our method can reconstruct images effectively even with limited data, suggesting potential for future work in image restoration and enhancement tasks. However, challenges such as low diversity during the sampling process remain and will be addressed in subsequent research. © 2025 IEEE.
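The forward process described in the abstract, progressively zeroing random pixels at each timestep rather than adding Gaussian noise, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-step removal fraction `frac_per_step` is an assumed schedule, since the paper's abstract does not specify one.

```python
import numpy as np

def forward_degrade(image, num_steps, frac_per_step=0.05, rng=None):
    """Progressively zero out random pixels, one mask per timestep.

    Analogue of the forward process in the abstract: instead of adding
    Gaussian noise, a random subset of the still-visible pixels is set
    to zero at each timestep. Returns the degraded images x_1 ... x_T.
    """
    rng = np.random.default_rng(rng)
    x = image.copy()
    trajectory = []
    for _ in range(num_steps):
        # Indices of pixels that have not been zeroed yet.
        alive = np.flatnonzero(x.reshape(-1) != 0)
        k = max(1, int(frac_per_step * x.size))
        if alive.size:
            drop = rng.choice(alive, size=min(k, alive.size), replace=False)
            x.reshape(-1)[drop] = 0.0  # view assignment: x is contiguous
        trajectory.append(x.copy())
    return trajectory

# Toy example: values drawn away from zero so masking is unambiguous.
img = np.random.default_rng(1).uniform(0.1, 1.0, size=(16, 16))
traj = forward_degrade(img, num_steps=10, rng=0)
zeros = [float((t == 0).mean()) for t in traj]
```

The fraction of zeroed pixels in `zeros` is non-decreasing over timesteps; the model's reverse process would then be trained to predict the missing values from the surviving pixels at each step.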

Keywords

deep learning; image generation
Title
Inpainting the Degraded Area with Diffusion Model
Authors
Choi, Hyun-Tae; Hong, Byung-Woo
DOI
10.1109/ICCE63647.2025.10929902
Publication date
2025
Type
Conference paper
Journal
Digest of Technical Papers - IEEE International Conference on Consumer Electronics