Token-Selective Vision Transformer for fine-grained image recognition of marine organisms

Cited: 8
Authors
Si, Guangzhe [1 ]
Xiao, Ying [2 ]
Wei, Bin [3 ]
Bullock, Leon Bevan [4 ]
Wang, Yueyue [5 ]
Wang, Xiaodong [4 ]
Affiliations
[1] Ocean Univ China, Coll Elect Engn, Qingdao, Shandong, Peoples R China
[2] Hong Kong Univ Sci & Technol, Sch Sci, Hong Kong, Peoples R China
[3] Qingdao Univ, Affiliated Hosp, Shandong Key Lab Digital Med & Comp Assisted Surg, Qingdao, Shandong, Peoples R China
[4] Ocean Univ China, Coll Comp Sci & Technol, Qingdao, Shandong, Peoples R China
[5] Ocean Univ China, Comp Ctr, Qingdao, Shandong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
token-selective; self-attention; vision transformer; fine-grained image classification; marine organisms;
DOI
10.3389/fmars.2023.1174347
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Discipline Code
08 ; 0830 ;
Abstract
Introduction: The objective of fine-grained image classification of marine organisms is to distinguish subtle variations between organisms so as to classify them accurately into subcategories. The key to accurate classification is locating the distinguishing feature regions, such as a fish's eye, fins, or tail. Images of marine organisms are hard to work with: they are often taken from multiple angles and depict different scenes, and they usually have complex backgrounds containing humans or other distractions, all of which makes it difficult to focus on the marine organism itself and identify its most distinctive features. Related work: Most existing fine-grained image classification methods based on Convolutional Neural Networks (CNN) cannot locate the distinguishing feature regions accurately enough, and the regions they identify also contain a large amount of background. The Vision Transformer (ViT) has strong global information-capturing ability and performs strongly on traditional classification tasks. The core of ViT is the Multi-Head Self-Attention mechanism (MSA), which first establishes connections between the different patch tokens of an image and then combines the information of all tokens for classification. Methods: However, not all tokens are conducive to fine-grained classification; many of them contain extraneous data (noise). We aim to eliminate the influence of interfering tokens, such as background, on the identification of marine organisms, and then gradually narrow the local feature area to pinpoint the distinctive features. To this end, this paper puts forward a novel Transformer-based framework, the Token-Selective Vision Transformer (TSVT), in which Token-Selective Self-Attention (TSSA) is proposed to select the discriminative, important tokens for attention computation, which helps limit attention to more precise local regions. TSSA is applied at different layers, and the number of selected tokens in each layer decreases relative to the previous layer, so the method gradually locates the distinguishing regions in a hierarchical manner. Results: The effectiveness of TSVT is verified on three marine organism datasets, and it is demonstrated that TSVT achieves state-of-the-art performance.
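The abstract describes TSSA only at a high level. The following is a minimal PyTorch sketch of one plausible way such a token-selection step could be realized, assuming that the [CLS]-to-patch attention score is used to rank token importance and that a fixed fraction of patch tokens is kept per layer. The class name TokenSelectiveAttention, the keep_ratio parameter, and the ranking criterion are illustrative assumptions, not taken from the paper's released code; residual connections and MLP sub-blocks of a full ViT layer are omitted for brevity.

```python
# Hypothetical sketch of token-selective self-attention (not the authors' code).
import torch
import torch.nn as nn

class TokenSelectiveAttention(nn.Module):
    """Multi-head self-attention followed by a token-selection step:
    patch tokens are ranked by the attention they receive from the [CLS]
    token, and only the top-k are forwarded to the next layer."""

    def __init__(self, dim, num_heads=8, keep_ratio=0.7):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        self.keep_ratio = keep_ratio  # fraction of patch tokens kept per layer (assumption)

    def forward(self, x):
        B, N, C = x.shape  # x: [CLS] token + patch tokens
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, C/heads)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        out = self.proj(out)

        # Rank patch tokens by the head-averaged [CLS]-to-patch attention.
        cls_attn = attn[:, :, 0, 1:].mean(dim=1)        # (B, N-1)
        k_keep = max(1, int(self.keep_ratio * (N - 1)))
        idx = cls_attn.topk(k_keep, dim=-1).indices     # (B, k_keep)
        idx = idx.unsqueeze(-1).expand(-1, -1, C) + 1   # shift indices past [CLS]
        patches = torch.gather(out, 1, idx)

        # Keep [CLS] plus only the selected patch tokens for the next layer.
        return torch.cat([out[:, :1], patches], dim=1)

# Stacking such blocks with a shrinking keep_ratio narrows attention to
# increasingly local discriminative regions, layer by layer.
blocks = nn.Sequential(*[TokenSelectiveAttention(192, keep_ratio=r)
                         for r in (0.8, 0.6, 0.4)])
tokens = torch.randn(2, 197, 192)   # e.g. ViT-Tiny: 1 [CLS] + 196 patch tokens
print(blocks(tokens).shape)          # token count shrinks after each block
```

In this reading, the hierarchical localization described in the abstract corresponds to applying the selection at successive layers with progressively smaller keep ratios, so that background tokens are discarded early and later layers attend only to the remaining discriminative regions.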
Pages: 11
Related Papers
50 records in total
  • [21] Convolutional transformer network for fine-grained action recognition
    Ma, Yujun
    Wang, Ruili
    Zong, Ming
    Ji, Wanting
    Wang, Yi
    Ye, Baoliu
    NEUROCOMPUTING, 2024, 569
  • [22] Multimodal Fine-Grained Transformer Model for Pest Recognition
    Zhang, Yinshuo
    Chen, Lei
    Yuan, Yuan
    ELECTRONICS, 2023, 12 (12)
  • [23] Selective Pooling Vector for Fine-grained Recognition
    Chen, Guang
    Yang, Jianchao
    Jin, Hailin
    Shechtman, Eli
    Brandt, Jonathan
    Han, Tony X.
    2015 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2015, : 860 - 867
  • [24] Learning to locate for fine-grained image recognition
    Chen, Jiamin
    Hu, Jianguo
    Li, Shiren
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2021, 206
  • [25] Incremental Learning for Fine-Grained Image Recognition
    Cao, Liangliang
    Hsiao, Jenhao
    de Juan, Paloma
    Li, Yuncheng
    Thomee, Bart
    ICMR'16: PROCEEDINGS OF THE 2016 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2016, : 363 - 366
  • [26] Hierarchical attention vision transformer for fine-grained visual classification
    Hu, Xiaobin
    Zhu, Shining
    Peng, Taile
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 91
  • [27] Fine-grained visual classification based on compact Vision Transformer
    Xu H.
    Guo L.
    Li R.-Z.
    Kongzhi yu Juece/Control and Decision, 2024, 39 (03): : 893 - 900
  • [28] Recombining Vision Transformer Architecture for Fine-Grained Visual Categorization
    Deng, Xuran
    Liu, Chuanbin
    Lu, Zhiying
    MULTIMEDIA MODELING, MMM 2023, PT II, 2023, 13834 : 127 - 138
  • [29] Fine-Grained Visual Prompt Learning of Vision-Language Models for Image Recognition
    Sun, Hongbo
    He, Xiangteng
    Zhou, Jiahuan
    Peng, Yuxin
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 5828 - 5836
  • [30] SwinFG: A fine-grained recognition scheme based on swin transformer
    Ma, Zhipeng
    Wu, Xiaoyu
    Chu, Anzhuo
    Huang, Lei
    Wei, Zhiqiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 244