3D-aware virtual try-on using only 2D inputs☆
Citations
Web of Science: 0
Scopus: 0
Abstract

We present 3DFit, a novel 3D-aware virtual try-on framework that synthesizes realistic try-on images using only 2D inputs. Unlike previous methods that either ignore 3D body geometry or rely entirely on 3D clothing models, 3DFit utilizes 3D human meshes estimated from 2D images and adaptively transforms 3D clothing templates guided by 2D clothing images. We further introduce a warping strategy that integrates 3D information into 2D clothing images using a set of pre-designed 3D templates, enabling efficient adaptation to various body shapes and poses. As a result, our method supports accurate and personalized virtual try-on experiences. Experimental results on the VITON-HD dataset demonstrate that 3DFit outperforms existing methods in preserving garment structure and maintaining high visual quality across a wide range of body types and poses.

Keywords

Virtual try-on; 3D-aware; Assistive computer vision
Title
3D-aware virtual try-on using only 2D inputs☆
Authors
Lee, Jaeyoon; Jung, Hojoon; Choi, Jongwon
DOI
10.1016/j.cviu.2026.104661
Publication date
2026-01
Type
Article
Journal
Computer Vision and Image Understanding
Volume
264