Learning Consistent Global-Local Representation for Cross-Domain Facial Expression Recognition

Cited by: 4
Authors:
Xie, Yuhao [1 ]
Gao, Yuefang [1 ]
Lin, Jiantao [2 ]
Chen, Tianshui [3 ]
Affiliations:
[1] South China Agr Univ, Coll Math & Informat, Guangzhou Key Lab Intelligent Agr, Guangzhou, Peoples R China
[2] Jinan Univ, Sch Intelligent Syst Sci & Engn, Zhuhai, Peoples R China
[3] Guangdong Univ Technol, Sch Informat Engn, Guangzhou, Peoples R China
Funding: National Natural Science Foundation of China
DOI: 10.1109/ICPR56361.2022.9956069
Chinese Library Classification: TP18 (Artificial Intelligence Theory)
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Domain shift is one of the thorny problems that severely restricts the accuracy of cross-domain facial expression recognition. Most existing works focus on learning domain-invariant features through global feature adaptation, while few exploit local features, which are more transferable across domains. In this paper, a consistent global-local feature and semantic learning framework is proposed that learns domain-invariant global and local feature representations and generates pseudo labels to facilitate cross-domain facial expression recognition. Specifically, the proposed method first learns domain-invariant global and local features simultaneously, via separate adversarial learning on the global and local branches. Once these features are acquired, a global-local semantic consistency constraint is introduced to generate pseudo labels for the unlabeled data of the target dataset. This strategy yields more pseudo labels of higher accuracy, owing to the information diversity of the global-local features, and requires no image transformation. Extensive experiments and analyses on several public datasets demonstrate the effectiveness of the proposed model.
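The abstract describes two mechanisms: separate adversarial alignment of global and local features, and pseudo-labelling gated by agreement between global and local predictions. The following PyTorch-style sketch illustrates both ideas under stated assumptions; it is not the authors' implementation, and all module names, feature dimensions, and the confidence threshold are illustrative.

# Minimal sketch (not the authors' code) of the two ideas in the abstract:
# (1) adversarial alignment of global and local features, and
# (2) pseudo-labelling by global-local prediction consistency.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer, standard in adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lambd * grad, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class Discriminator(nn.Module):
    """Binary domain classifier (source vs. target)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

# Hypothetical dims: one global vector plus K local (facial-region) vectors.
D, K, C = 512, 4, 7                  # feature dim, local regions, expression classes
disc_g = Discriminator(D)            # discriminator on global features
disc_l = Discriminator(D)            # discriminator shared across local regions
cls_g = nn.Linear(D, C)              # classifier on the global feature
cls_l = nn.Linear(D, C)              # classifier on pooled local features

def adversarial_loss(f_g, f_l, domain_label, lambd=1.0):
    """(1) Separate adversarial losses for global and local features.
    f_g: (B, D) global features; f_l: (B, K, D) local features;
    domain_label: 1 for source batches, 0 for target batches."""
    y = torch.full((f_g.size(0), 1), float(domain_label))
    loss_g = F.binary_cross_entropy_with_logits(disc_g(grad_reverse(f_g, lambd)), y)
    logits_l = disc_l(grad_reverse(f_l.reshape(-1, D), lambd))
    loss_l = F.binary_cross_entropy_with_logits(logits_l, y.repeat_interleave(K, 0))
    return loss_g + loss_l

@torch.no_grad()
def pseudo_labels(f_g, f_l, tau=0.9):
    """(2) Keep a target sample only if the global and local predictions
    agree on the class and both are confident (threshold tau is an assumption)."""
    p_g = F.softmax(cls_g(f_g), dim=1)
    p_l = F.softmax(cls_l(f_l.mean(dim=1)), dim=1)
    conf_g, y_g = p_g.max(dim=1)
    conf_l, y_l = p_l.max(dim=1)
    keep = (y_g == y_l) & (conf_g > tau) & (conf_l > tau)
    return y_g[keep], keep

In a training loop of this kind, adversarial_loss would be summed over source (domain_label=1) and target (domain_label=0) batches, while pseudo_labels selects which target samples enter the supervised expression loss in the next round.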
Pages: 2489–2495 (7 pages)