Modeling Selective Feature Attention for Lightweight Text Matching

Cited: 0
Authors
Zang, Jianxiang [1 ]
Liu, Hui [1 ]
Affiliations
[1] Shanghai Univ Int Business & Econ, Sch Stat & Informat, Shanghai, Peoples R China
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Representation-based Siamese networks have risen to popularity in lightweight text matching due to their low deployment and inference costs. While word-level attention mechanisms have been implemented within Siamese networks to improve performance, we propose Feature Attention (FA), a novel downstream block designed to enrich the modeling of dependencies among embedding features. Employing "squeeze-and-excitation" techniques, the FA block dynamically adjusts the emphasis on individual features, enabling the network to concentrate more on features that significantly contribute to the final classification. Building upon FA, we introduce a dynamic "selection" mechanism called Selective Feature Attention (SFA), which leverages a stacked BiGRU Inception structure. The SFA block facilitates multi-scale semantic extraction by traversing different stacked BiGRU layers, encouraging the network to selectively concentrate on semantic information and embedding features across varying levels of abstraction. Both the FA and SFA blocks integrate seamlessly with various Siamese networks, offering plug-and-play compatibility. Experimental evaluations conducted across diverse text matching baselines and benchmarks underscore the indispensability of modeling feature attention and the superiority of the "selection" mechanism.
Pages: 6624-6632
Page count: 9
Related Papers
50 records in total
  • [31] Multiscale Feature Fusion Attention Lightweight Facial Expression Recognition
    Ni, Jinyuan
    Zhang, Xinyue
    Zhang, Jianxun
    INTERNATIONAL JOURNAL OF AEROSPACE ENGINEERING, 2022, 2022
  • [32] A lightweight feature attention fusion network for pavement crack segmentation
    Huang, Yucheng
    Liu, Yuchen
    Liu, Fang
    Liu, Wei
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2024, 39 (18) : 2811 - 2825
  • [33] Lightweight Semantic Segmentation Network based on Attention Feature Fusion
    Kuang, Xianyan
    Liu, Ping
    Chen, Yixi
    Zhang, Jianhua
    ENGINEERING LETTERS, 2023, 31 (04) : 1584 - 1591
  • [34] Lightweight single image dehazing network with residual feature attention
    Bai, Yingshuang
    Li, Huiming
    Leng, Jing
    Luan, Yaqing
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (01)
  • [35] Hierarchical Feature Fusion With Text Attention For Multi-scale Text Detection
    Liu, Chao
    Zou, Yuexian
    Guan, Wenjie
    2018 IEEE 23RD INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), 2018
  • [36] Selective attention to an item is stored as a feature of the item
    Sperling, G.
    Wurst, S. A.
    BULLETIN OF THE PSYCHONOMIC SOCIETY, 1991, 29 (06) : 473 - 473
  • [37] Are feature-selective and spatial attention independent?
    Andersen, Soren K.
    Hillyard, Steven A.
    Mueller, Matthias M.
    PSYCHOPHYSIOLOGY, 2009, 46 : S118 - S118
  • [38] Are feature-selective and spatial attention independent?
    Andersen, S. K.
    Hillyard, S. A.
    Mueller, M. M.
    PERCEPTION, 2009, 38 : 89 - 89
  • [39] Weight estimation for feature integration and saliency region extraction in modeling computation of visual selective attention
    Liu, Qiong
    Qin, Shi-Yin
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2011, 24 (04) : 548 - 554
  • [40] Guided Graph Attention Learning for Video-Text Matching
    Li, Kunpeng
    Liu, Chang
    Stopa, Mike
    Amano, Jun
    Fu, Yun
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (02)