Twinned attention network for occlusion-aware facial expression recognition

Cited by: 0
Authors
Devasena, G. [1 ]
Vidhya, V. [1 ]
Affiliations
[1] Indian Inst Informat Technol, Dept Comp Sci & Engn, Tiruchirappalli, Tamilnadu, India
Keywords
Facial expression recognition; Occluded images; Attention mechanism; Representation; Features
DOI
10.1007/s00138-024-01641-0
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Facial expression recognition (FER) is a challenging task in image processing for complex real-world scenarios, where images are captured under varying lighting conditions, with facial obstructions, and across a diverse range of facial orientations. To address this issue, a novel Twinned attention network (Twinned-Att) is proposed in this paper for efficient FER in occluded images. The proposed Twinned-Att network is designed as two separate modules: a holistic module (HM) and a landmark-centric module (LCM). The holistic module comprises a dual coordinate attention block (Dual-CA) and a cross convolution block (Cross-conv). The Dual-CA block learns positional, spatial, and contextual information by highlighting the most prominent characteristics of the facial regions. The Cross-conv block learns spatial inter-dependencies and correlations to identify complex relationships between various facial regions. The LCM emphasizes smaller, distinct local regions while maintaining resilience against occlusions. Extensive experiments have been undertaken to demonstrate the efficacy of the proposed Twinned-Att. The Twinned-Att achieves accuracies of 86.92%, 85.64%, 78.40%, 69.82%, 64.71%, 85.52%, and 85.83% on the RAF-DB, FERPlus, FER2013, FED-RO, SFEW 2.0, occluded RAF-DB, and occluded FERPlus datasets, respectively. The proposed Twinned-Att network is evaluated with various backbone networks, including ResNet-18, ResNet-50, and ResNet-152. It consistently performs well, highlighting its strength in addressing the challenges of robust FER in images captured in complex real-world environments.
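The record does not specify the internals of the Dual-CA block. As an illustration only, the sketch below shows a generic coordinate-attention block in PyTorch (pooling separately along the height and width axes to produce direction-aware attention maps), which is the kind of positional and spatial re-weighting the abstract attributes to Dual-CA; the class name CoordinateAttention, the reduction ratio, and all layer sizes are assumptions, not the authors' implementation.

```python
# Minimal sketch of a coordinate-attention block (assumed design, not the
# paper's exact Dual-CA). It pools features along H and W separately, encodes
# the two axes jointly, and re-weights the input with direction-aware maps.
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Encode positional information along each spatial axis.
        x_h = self.pool_h(x)                          # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)              # (B, C, H+W, 1)
        y = self.act(self.bn(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)                 # (B, mid, 1, W)
        # Direction-aware attention maps re-weight the input features.
        a_h = torch.sigmoid(self.conv_h(y_h))         # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w))         # (B, C, 1, W)
        return x * a_h * a_w


if __name__ == "__main__":
    feat = torch.randn(2, 64, 28, 28)   # toy feature map from a backbone stage
    out = CoordinateAttention(64)(feat)
    print(out.shape)                    # torch.Size([2, 64, 28, 28])
```

Such a block preserves the feature-map shape, so it can be dropped after any backbone stage (e.g. a ResNet-18/50/152 block) without changing the rest of the network.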
Pages: 18