Cross-domain Beauty Item Retrieval via Unsupervised Embedding Learning

Cited by: 7
Authors
Lin, Zehang [1 ]
Xie, Haoran [2 ]
Kang, Peipei [3 ]
Yang, Zhenguo [3 ]
Liu, Wenyin [3 ]
Li, Qing [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[2] Educ Univ Hong Kong, Dept Comp, Hong Kong, Peoples R China
[3] Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cross-domain image retrieval; UEL; Query expansion;
DOI
10.1145/3343031.3356055
CLC Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Cross-domain image retrieval constantly runs into insufficient labelled data in the real world. In this paper, we propose unsupervised embedding learning (UEL) for cross-domain beauty and personal care product retrieval, which fine-tunes a convolutional neural network (CNN). More specifically, UEL utilizes a non-parametric softmax to train the CNN model as an instance-level classifier, which reduces the influence of some inevitable problems (e.g., shape variations). To obtain better performance, we integrate several existing retrieval methods trained on different datasets. Furthermore, a query expansion strategy (i.e., diffusion) is adopted to further improve the results. Extensive experiments on a dataset of half a million beauty and personal care product images (Perfect-500K) demonstrate the effectiveness of the proposed method. Our approach achieved 2nd place on the leaderboard of the Grand Challenge of AI Meets Beauty in ACM Multimedia 2019. Our code is available at: https://github.com/RetrainIt/Perfect-Half-Million-Beauty-Product-Image-Recognition-Challenge-2019.
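The training objective described above is instance-level: every image in the unlabelled collection is treated as its own class, and a non-parametric softmax over stored instance embeddings replaces the usual classifier weights. The Python sketch below shows one common memory-bank formulation of this loss (in the spirit of instance discrimination); the function names, the temperature tau, and the momentum update are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def nonparametric_softmax_loss(features, indices, memory_bank, tau=0.07):
    # features:    (B, D) L2-normalised batch embeddings from the CNN
    # indices:     (B,)   dataset index of each image, i.e. its own "class"
    # memory_bank: (N, D) L2-normalised embeddings of all N instances
    logits = features @ memory_bank.t() / tau   # cosine similarity to every instance
    return F.cross_entropy(logits, indices)     # each instance is its own class

@torch.no_grad()
def update_memory_bank(memory_bank, features, indices, momentum=0.5):
    # Exponential-moving-average update of the stored instance embeddings.
    mixed = momentum * memory_bank[indices] + (1.0 - momentum) * features
    memory_bank[indices] = F.normalize(mixed, dim=1)

Diffusion-based query expansion, the other ingredient mentioned in the abstract, re-ranks results by propagating the query's relevance score over a similarity graph of database images, so that items close to many of the top hits rise in the ranking. A minimal closed-form sketch (manifold ranking on a dense affinity matrix; practical systems use a truncated kNN graph and iterative solvers) might look as follows; alpha and the dense solve are illustrative choices, not the paper's exact procedure.

import numpy as np

def diffusion_rerank(affinity, query_idx, alpha=0.9):
    # affinity:  (N, N) symmetric, non-negative similarity matrix
    # query_idx: position of the query node in the graph
    n = affinity.shape[0]
    d = affinity.sum(axis=1)
    S = affinity / np.sqrt(np.outer(d, d))          # normalisation D^-1/2 W D^-1/2
    y = np.zeros(n)
    y[query_idx] = 1.0                              # seed all relevance mass at the query
    f = np.linalg.solve(np.eye(n) - alpha * S, y)   # closed form of f = (I - aS)^-1 y
    return np.argsort(-f)                           # indices ranked by diffused relevance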
Pages: 2543-2547
Page count: 5
Related Papers
50 items in total
  • [31] Discrimination and structure preserved cross-domain subspace learning for unsupervised domain adaption
    Tao Y.
    Yang N.
    Guo T.
    Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2022, 49 (04): 90 - 99+117
  • [32] Unsupervised domain adaptation by cross-domain consistency learning for CT body composition
    Ali, Shahzad
    Lee, Yu Rim
    Park, Soo Young
    Tak, Won Young
    Jung, Soon Ki
    MACHINE VISION AND APPLICATIONS, 2025, 36 (01)
  • [33] Cross-modal & Cross-domain Learning for Unsupervised LiDAR Semantic Segmentation
    Chen, Yiyang
    Zhao, Shanshan
    Ding, Changxing
    Tang, Liyao
    Wang, Chaoyue
    Tao, Dacheng
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 3866 - 3875
  • [34] Unsupervised Cross-Domain Rumor Detection with Contrastive Learning and Cross-Attention
    Ran, Hongyan
    Jia, Caiyan
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11, 2023, : 13510 - 13518
  • [35] Deep Metric Learning for Cross-Domain Fashion Instance Retrieval
    Ibrahimi, Sarah
    van Noord, Nanne
    Geradts, Zeno
    Worring, Marcel
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 3165 - 3168
  • [36] Mutual Information-Based Word Embedding for Unsupervised Cross-Domain Sentiment Classification
    Liang, Junge
    Ma, Lei
    Xiong, Xin
    Shao, Dangguo
    Xiang, Yan
    Wang, Xiongbing
    2019 IEEE 4TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND BIG DATA ANALYSIS (ICCCBDA), 2019, : 625 - 628
  • [37] Deep Bi-directional Cross-triplet Embedding for Cross-Domain Clothing Retrieval
    Jiang, Shuhui
    Wu, Yue
    Fu, Yun
    MM'16: PROCEEDINGS OF THE 2016 ACM MULTIMEDIA CONFERENCE, 2016, : 52 - 56
  • [38] Unsupervised content and style learning for multimodal cross-domain image translation
    Lin, Zhijie
    Chen, Jingjing
    Ma, Xiaolong
    Li, Chao
    Zhang, Huiming
    Zhao, Lei
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [39] Unsupervised Cross-domain Learning by Interaction Information Co-clustering
    Ando, Shin
    Suzuki, Einoshin
    ICDM 2008: EIGHTH IEEE INTERNATIONAL CONFERENCE ON DATA MINING, PROCEEDINGS, 2008, : 13+
  • [40] Unsupervised Transfer Components Learning for Cross-Domain Speech Emotion Recognition
    Jiang, Shenjie
    Song, Peng
    Li, Shaokai
    Zhao, Keke
    Zheng, Wenming
    INTERSPEECH 2023, 2023, : 4538 - 4542