Modeling Selective Feature Attention for Lightweight Text Matching

Cited by: 0
Authors
Zang, Jianxiang [1 ]
Liu, Hui [1 ]
Affiliations
[1] Shanghai University of International Business and Economics, School of Statistics and Information, Shanghai, People's Republic of China
Keywords
DOI
None available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Representation-based Siamese networks have risen to popularity in lightweight text matching due to their low deployment and inference costs. While word-level attention mechanisms have been incorporated into Siamese networks to improve performance, dependencies among embedding features remain under-modeled. We propose Feature Attention (FA), a novel downstream block designed to enrich the modeling of these dependencies. Employing "squeeze-and-excitation" techniques, the FA block dynamically adjusts the emphasis placed on individual features, enabling the network to concentrate on those that contribute most to the final classification. Building upon FA, we introduce Selective Feature Attention (SFA), a dynamic "selection" mechanism that leverages a stacked BiGRU Inception structure. By traversing BiGRU stacks of different depths, the SFA block performs multi-scale semantic extraction, encouraging the network to selectively attend to semantic information and embedding features at varying levels of abstraction. Both the FA and SFA blocks integrate seamlessly with various Siamese networks, exhibiting plug-and-play behavior. Experimental evaluations across diverse text matching baselines and benchmarks underscore the indispensability of modeling feature attention and the superiority of the "selection" mechanism.
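The abstract describes the FA and SFA blocks only at a high level. The PyTorch sketch below illustrates one plausible reading of the two mechanisms; it is a minimal illustration under stated assumptions, not the authors' implementation. The module names, the mean-pooling "squeeze", the reduction ratio, and the use of three BiGRU branches of depths 1 to 3 are all assumptions introduced here for concreteness.

```python
import torch
import torch.nn as nn


class FeatureAttention(nn.Module):
    """Sketch of an FA-style block: squeeze-and-excitation applied to the
    embedding-feature axis of a token sequence (hypothetical reading)."""

    def __init__(self, d_model: int, reduction: int = 4):
        super().__init__()
        self.excite = nn.Sequential(
            nn.Linear(d_model, d_model // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(d_model // reduction, d_model),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        z = x.mean(dim=1)          # squeeze: one descriptor per feature
        s = self.excite(z)         # excite: per-feature gates in (0, 1)
        return x * s.unsqueeze(1)  # re-scale every token's features


class SelectiveFeatureAttention(nn.Module):
    """Sketch of an SFA-style block: Inception-like branches of stacked
    BiGRUs plus a softmax "selection" over branches per embedding feature
    (hypothetical reading, in the spirit of selective-kernel networks)."""

    def __init__(self, d_model: int, num_branches: int = 3, reduction: int = 4):
        super().__init__()
        assert d_model % 2 == 0
        # Branch k stacks k+1 BiGRU layers; hidden size d_model // 2 keeps
        # the bidirectional output width equal to d_model.
        self.branches = nn.ModuleList(
            nn.GRU(d_model, d_model // 2, num_layers=k + 1,
                   batch_first=True, bidirectional=True)
            for k in range(num_branches)
        )
        self.squeeze = nn.Sequential(
            nn.Linear(d_model, d_model // reduction),
            nn.ReLU(inplace=True),
        )
        # One gate head per branch; a softmax across branches "selects".
        self.gates = nn.ModuleList(
            nn.Linear(d_model // reduction, d_model)
            for _ in range(num_branches)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multi-scale semantics: one representation per BiGRU depth.
        feats = torch.stack([gru(x)[0] for gru in self.branches], dim=1)
        z = self.squeeze(feats.sum(dim=1).mean(dim=1))       # (B, d/r)
        logits = torch.stack([g(z) for g in self.gates], dim=1)
        attn = torch.softmax(logits, dim=1).unsqueeze(2)     # (B, K, 1, d)
        return (feats * attn).sum(dim=1)                     # (B, T, d)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)  # (batch, tokens, embedding features)
    print(FeatureAttention(64)(x).shape)           # torch.Size([2, 16, 64])
    print(SelectiveFeatureAttention(64)(x).shape)  # torch.Size([2, 16, 64])
```

Because both modules map a (batch, seq_len, d_model) tensor to a tensor of the same shape, they can be dropped between an encoder and a matching layer of an existing Siamese network, which is consistent with the plug-and-play behavior the abstract claims.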
Pages: 6624-6632
Page count: 9
Related papers
50 records in total
  • [21] A Selective Attention Template Matching Neural Network
    Ye, XY
    Qi, FH
    Yin, HJ
    1997 IEEE International Conference on Intelligent Processing Systems, Vols 1 & 2, 1997: 507-511
  • [22] Modeling - Selective Attention Institutionalized
    KOREISHA, S
    STOBAUGH, R
    Technology Review, 1981, 83(04): 64-66
  • [23] ParaFormer: Parallel Attention Transformer for Efficient Feature Matching
    Lu, Xiaoyong
    Yan, Yaping
    Kang, Bin
    Du, Songlin
    Thirty-Seventh AAAI Conference on Artificial Intelligence, Vol 37, No 2, 2023: 1853-1860
  • [25] Simultaneous Deep Stereo Matching and Dehazing with Feature Attention
    Song, Taeyong
    Kim, Youngjung
    Oh, Changjae
    Jang, Hyunsung
    Ha, Namkoo
    Sohn, Kwanghoon
    International Journal of Computer Vision, 2020, 128(04): 799-817
  • [26] Transformer With Linear-Window Attention for Feature Matching
    Shen, Zhiwei
    Kong, Bin
    Dong, Xiaoyu
    IEEE Access, 2023, 11: 121202-121211
  • [27] Learning for Feature Matching via Graph Context Attention
    Guo, Junwen
    Xiao, Guobao
    Tang, Zhimin
    Chen, Shunxing
    Wang, Shiping
    Ma, Jiayi
    IEEE Transactions on Geoscience and Remote Sensing, 2023, 61
  • [28] Text Feature Extraction and Selection Based on Attention Mechanism
    Ma, Longxuan
    Zhang, Lei
    Advances in Knowledge Discovery and Data Mining, PAKDD 2019, Pt II, 2019, 11440: 615-627
  • [29] Feature attention based detection model for medical text
    Xie, Qubo
    Zhou, Ke
    Fu, Xiao
    Fan, Xiaohu
    Journal of Intelligent & Fuzzy Systems, 2019, 37(04): 4585-4594
  • [30] Attention Guided Feature Encoding for Scene Text Recognition
    Hassan, Ehtesham
    Lekshmi, V. L.
    Journal of Imaging, 2022, 8(10)