Confidence Controls Deep Metric Learning

Abstract

This paper presents a novel perspective on enhancing the performance of classification-based Deep Metric Learning (DML). While classification-based DML has seen substantial progress through various regularization techniques, conventional normalization and scaling methods often lead to premature loss saturation and vanishing gradients caused by model overconfidence. To address this issue, we introduce Confidence Control (CC), a new regularization method that prevents this saturation by actively managing prediction confidence. By ensuring sufficient gradient magnitudes throughout training, CC encourages samples to align more strongly with their corresponding class weights. This results in improved feature invariance and tighter intra-class clustering in the embedding space. We propose two implementations of CC: (1) NaiveCC, a direct method that continuously maintains meaningful loss magnitudes by explicitly reducing confidence via a detached logit term; and (2) Evidence-based CC (EVDCC), which addresses the risk of excessive variance reduction (i.e., feature collapse) by imposing a geometric constraint based on eigenvector-based augmentation. Experimental results demonstrate that CC is easily integrable into existing DML frameworks and consistently improves performance, achieving a 2–5% gain in Recall@K.
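The abstract's description of NaiveCC, which keeps the loss from saturating by "explicitly reducing confidence via a detached logit term", can be illustrated with a minimal sketch. The snippet below assumes a standard cosine-softmax classification-DML setup; the names `naive_cc_loss`, `scale`, and `alpha` are hypothetical and not taken from the paper, and the exact formulation may differ from the authors' method.

```python
import torch
import torch.nn.functional as F

def naive_cc_loss(embeddings, class_weights, labels, scale=32.0, alpha=0.5):
    """Sketch of a confidence-controlled classification loss (assumed form).

    Cosine logits between L2-normalized embeddings and class weights are the
    usual classification-based DML setup. Subtracting a detached copy of the
    logits (scaled by alpha) lowers the forward-pass confidence, so the
    cross-entropy and its gradients do not saturate, while gradients still
    flow only through the original logits.
    """
    emb = F.normalize(embeddings, dim=1)           # (B, D) unit-norm features
    w = F.normalize(class_weights, dim=1)          # (C, D) unit-norm class weights
    logits = scale * emb @ w.t()                   # (B, C) scaled cosine logits
    controlled = logits - alpha * logits.detach()  # confidence reduced; no extra gradient path
    return F.cross_entropy(controlled, labels)

# Example usage with random tensors:
# loss = naive_cc_loss(torch.randn(8, 128), torch.randn(100, 128), torch.randint(0, 100, (8,)))
```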

Keywords

Deep metric learning; Confidence control; Classification
Title
Confidence Controls Deep Metric Learning
Authors
Park, Jinhee; Yoo, Hee Bin; Zhang, Byoung-Tak; Kwon, Junseok
DOI
10.1007/s10994-026-07032-y
Publication Date
2026-03
Type
Article
Journal
Machine Learning
Volume
115
Issue
4