Abstract
AI-generated facial synthesis (DeepFakes) has rapidly progressed through advanced generative models, producing realistic manipulated media that is increasingly difficult to distinguish from authentic content. Misuse of these technologies threatens public trust and democratic stability, particularly in sensitive contexts. Beyond detection accuracy, responsible forensic analysis demands fair and ethical deployment, yet recent studies reveal demographic performance disparities that existing methods have not adequately addressed, and fairness approaches often fail to generalize under distribution shifts. To address this issue, we organized the first competition focused on fairness in AI-generated face detection at NeurIPS 2025, connecting fairness research with real-world DeepFake challenges. The competition attracted substantial global participation, including more than 64 registered teams and 158 participants from 63 organizations across 20 countries, with 11 teams surpassing the baseline. Analysis of the submitted methods reveals that the most effective approaches combined data-centric design, robust representation learning, and model-level diversification rather than relying on fairness constraints alone. In particular, the top-ranked solution achieved strong fairness improvements by integrating careful data curation, a mixture-of-experts architecture, and test-time augmentation, demonstrating that fairness generalization can be improved without explicitly optimizing demographic-specific losses. Other competitive methods explored complementary directions, including foundation-model-based feature extraction, dual-branch fusion of global and local cues, ensemble learning, and post hoc calibration, each exposing distinct trade-offs among fairness, utility, and deployability. 
Our findings highlight that fairness metrics can be significantly improved through strategic system design, but also reveal limitations of current evaluation protocols and the risk of trivial solutions under fixed thresholds. Overall, this competition provides concrete empirical evidence and methodological insights for building more fair, robust, and trustworthy DeepFake detection systems, and offers guidance for future benchmarks, evaluation metrics, and responsible deployment practices. The competition website is available at: https://sites.google.com/view/aifacedetection/.
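The abstract refers to "demographic performance disparities" without defining the competition's exact fairness metric. As a minimal sketch, assuming a common proxy (the maximum accuracy gap between demographic groups; the function name and toy data below are hypothetical, not from the paper), such a disparity can be computed like this:

```python
# Hypothetical sketch: one common proxy for demographic performance
# disparity is the max-min accuracy gap across demographic groups.
# The exact metric used by the competition is not stated in this record.
from collections import defaultdict

def demographic_accuracy_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the max-min gap across groups."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy example: group "B" is misclassified more often than group "A".
acc, gap = demographic_accuracy_gap(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[1, 1, 0, 0, 1, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(acc, gap)  # group A: 1.0, group B: 0.5, gap 0.5
```

A gap of zero indicates equal accuracy across groups; the abstract notes that optimizing such metrics under fixed thresholds can admit trivial solutions, which is why the evaluation protocol itself matters.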
Keywords
- Title
- The Competition of Fairness in AI-generated Face Detection: Methods and Results
- Authors
- Hu, Shu; Lin, Li; Desai, Shail; Pawar, Aditya; Lin, Guangyu; Wang, Xin; Schiff, Daniel S.; Mohanty, Sachi Nandan; Ofman, Ryan; Bejtic, Narcis; Gillham, Jon; Zhang, Wenbin; Wu, Baoyuan; Canton, Cristian; Liu, Xiaoming; Verdoliva, Luisa; Lyu, Siwei; Tang, Yongwei; Wu, Zhiqiang; Seow, Jiawen; Alaverdyan, Zara; Baron, Anne-Flore; Bozonnet, Simon; Bruveris, Martins; Gietema, Jochem; Innocenti, Lucia; Ivanova, Lisa; Koch, Olivier; Ni, Harry; Pajot, Arthur; Sabathe, Romain; Gu, Fengming; Long, Xingming; Zhang, Jie; Ge, Wenqing; Cao, Xiangkui; Min, Yuecong; Liu, Yingjie; Guo, Zonghui; Shan, Shiguang; Park, Jinhee; Kim, Minjun; Park, Ahyeon; Kim, Guisik; Kim, Taewoo; Yoo, YoungJoon; Kwon, Junseok; Li, Zhaoda; Tang, Mengyun; Huang, Leyang; Dura, Bogdan; Balmuş, Sebastian; Su, Fang-Yi; Lee, Tsung-Hua; Kao, Ting-Wan
- Publication Date
- 2026-04
- Type
- Article; Early Access
- Journal
- MACHINE INTELLIGENCE RESEARCH