Detail View
Abstract
Diffusion-based style transfer has achieved high-quality results and is widely used in creative industries. Although simply preserving source image features is often more effective for producing a feasible output than fully converting them into the reference style, existing methods frequently generate distorted output images because such preservation is structurally prohibited in most conventional approaches. This issue is prevalent in novel source-reference pairs where the reference image offers no suitable style attributes for the source image, or vice versa. To address this issue, we propose a novel blending strategy that enables the diffusion model to use the source image directly when doing so leads to a more suitable output. By integrating blended embeddings of visual and textual style information from the source and reference images, our method maintains structural consistency while achieving a harmonious output image. Experiments demonstrate that our approach enhances style transfer fidelity and prevents unintended distortions, particularly in unexpected source-reference pairs.
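The abstract describes integrating blended embeddings of source and reference style information. The record does not specify the blending operation, so the following is only a minimal illustrative sketch, assuming a simple linear interpolation between two fixed-size embedding vectors (e.g. CLIP-style image/text embeddings) followed by renormalization; the function name, the `alpha` weight, and the 768-dimensional toy inputs are all hypothetical stand-ins, not the paper's actual method.

```python
import numpy as np

def blend_embeddings(source_emb, reference_emb, alpha=0.5):
    """Linearly interpolate two embedding vectors and renormalize.

    alpha = 0 keeps the source embedding unchanged; alpha = 1 uses
    only the reference. This is a generic sketch, not the blending
    strategy proposed in the paper, which this record does not detail.
    """
    blended = (1.0 - alpha) * source_emb + alpha * reference_emb
    norm = np.linalg.norm(blended)
    # Renormalize so the blend lives on the same hypersphere as
    # typical unit-normalized embeddings.
    return blended / norm if norm > 0 else blended

# Toy usage with random stand-ins for 768-dim style embeddings.
rng = np.random.default_rng(0)
src = rng.normal(size=768)
ref = rng.normal(size=768)
mix = blend_embeddings(src, ref, alpha=0.3)
```

In a real pipeline, such a blended embedding would be fed to the diffusion model's conditioning pathway in place of the pure reference-style embedding, letting the model fall back toward source features when the reference style does not fit.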
Keywords
- Title
- Blended embedding guided style transfer in inversion-based diffusion for creatively-matched source-reference pairs
- Authors
- Kim, Sojeong; Sohn, Bong-Soo; Lee, Jaesung
- Publication date
- 2026-04
- Type
- Article
- Journal
- Neurocomputing
- Volume
- 675