Modeling Selective Feature Attention for Lightweight Text Matching

Cited by: 0
Authors
Zang, Jianxiang [1 ]
Liu, Hui [1 ]
Institutions
[1] Shanghai Univ Int Business & Econ, Sch Stat & Informat, Shanghai, Peoples R China
Keywords
DOI
N/A
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Representation-based Siamese networks have risen to popularity in lightweight text matching due to their low deployment and inference costs. While word-level attention mechanisms have been implemented within Siamese networks to improve performance, we propose Feature Attention (FA), a novel downstream block designed to enrich the modeling of dependencies among embedding features. Employing "squeeze-and-excitation" techniques, the FA block dynamically adjusts the emphasis on individual features, enabling the network to concentrate more on features that significantly contribute to the final classification. Building upon FA, we introduce a dynamic "selection" mechanism called Selective Feature Attention (SFA), which leverages a stacked BiGRU Inception structure. The SFA block facilitates multi-scale semantic extraction by traversing different stacked BiGRU layers, encouraging the network to selectively concentrate on semantic information and embedding features across varying levels of abstraction. Both the FA and SFA blocks offer a seamless integration capability with various Siamese networks, showcasing a plug-and-play characteristic. Experimental evaluations conducted across diverse text matching baselines and benchmarks underscore the indispensability of modeling feature attention and the superiority of the "selection" mechanism.
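The FA block described above can be pictured as a squeeze-and-excitation gate over embedding features: pool token representations into one descriptor per feature, pass it through a bottleneck MLP, and use the resulting gates to re-weight every feature. The NumPy sketch below illustrates this idea under assumed details (mean pooling as the squeeze, a two-layer ReLU/sigmoid bottleneck as the excitation, hypothetical weight names `w1`/`w2`); it is not the paper's exact implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_attention(x, w1, w2):
    """Squeeze-and-excitation over embedding features (illustrative sketch).

    x  : (seq_len, d) token embeddings
    w1 : (d // r, d)  bottleneck weights, r = reduction ratio
    w2 : (d, d // r)  expansion weights
    """
    s = x.mean(axis=0)            # squeeze: pool over tokens -> (d,)
    z = np.maximum(w1 @ s, 0.0)   # excitation: bottleneck + ReLU -> (d // r,)
    a = sigmoid(w2 @ z)           # per-feature gates in (0, 1) -> (d,)
    return x * a                  # re-weight each embedding feature

# Toy usage with random weights: d = 8 features, reduction ratio r = 2.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))       # 5 tokens, 8 embedding features
w1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=(8, 4))
out = feature_attention(x, w1, w2)
```

Because the gates lie in (0, 1), the block can only attenuate features, letting downstream layers focus on the dimensions that matter most for classification; the SFA variant would additionally select among multi-scale BiGRU representations before gating.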
Pages: 6624-6632
Page count: 9
Related Papers
50 items total
  • [1] CMMCAN: Lightweight Feature Extraction and Matching Network for Endoscopic Images Based on Adaptive Attention
    Chong, Nannan
    Yang, Fan
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 80 (02): 2761-2783
  • [2] Adversarial Feature Matching for Text Generation
    Zhang, Yizhe
    Gan, Zhe
    Fan, Kai
    Chen, Zhi
    Henao, Ricardo
    Shen, Dinghan
    Carin, Lawrence
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017
  • [3] LightSGM: Local feature matching with lightweight seeded
    Feng, Shuai
    Qian, Huaming
    Wang, Huilin
    Wang, Wenna
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2024, 36 (06)
  • [4] Modeling the dynamics of feature binding during object-selective attention
    Rothenstein, Albert L.
    Tsotsos, John K.
    ATTENTION IN COGNITIVE SYSTEMS: THEORIES AND SYSTEMS FROM AN INTERDISCIPLINARY VIEWPOINT, 2007, 4840: 325-337
  • [5] Text-guided floral image generation based on lightweight deep attention feature fusion GAN
    Yang, Wenji
    An, Hang
    Hu, Wenchao
    Ma, Xinxin
    Xie, Liping
    VISUAL COMPUTER, 2024: 3519-3535
  • [6] Interactive Attention Networks for Semantic Text Matching
    Zhao, Sendong
    Huang, Yong
    Su, Chang
    Li, Yuantong
    Wang, Fei
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020: 861-870
  • [7] Feature Differentiation and Fusion for Semantic Text Matching
    Peng, Rui
    Hong, Yu
    Jin, Zhiling
    Yao, Jianmin
    Zhou, Guodong
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2023, PT II, 2023, 13981: 32-46
  • [8] PROBABILITY MATCHING IN VISUAL SELECTIVE ATTENTION
    VANDERHEIJDEN, AHC
    CANADIAN JOURNAL OF PSYCHOLOGY-REVUE CANADIENNE DE PSYCHOLOGIE, 1989, 43 (01): 45-52
  • [9] MatchFormer: Interleaving Attention in Transformers for Feature Matching
    Wang, Qing
    Zhang, Jiaming
    Yang, Kailun
    Peng, Kunyu
    Stiefelhagen, Rainer
    COMPUTER VISION - ACCV 2022, PT III, 2023, 13843: 256-273
  • [10] ResMatch: Residual Attention Learning for Feature Matching
    Deng, Yuxin
    Zhang, Kaining
    Zhang, Shihua
    Li, Yansheng
    Ma, Jiayi
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 2, 2024: 1501-1509