Enhance fashion classification of mosquito vector species via self-supervised vision transformer

Cited: 0
Authors
Kittichai, Veerayuth [1 ]
Kaewthamasorn, Morakot [2 ]
Chaiphongpachara, Tanawat [3 ]
Laojun, Sedthapong [3 ]
Saiwichai, Tawee [4 ]
Naing, Kaung Myat [6 ]
Tongloy, Teerawat [6 ]
Boonsang, Siridech [5 ]
Chuwongin, Santhad [6 ]
Affiliations
[1] King Mongkuts Inst Technol Ladkrabang, Fac Med, Bangkok, Thailand
[2] Chulalongkorn Univ, Fac Vet Sci, Vet Parasitol Res Unit, Bangkok, Thailand
[3] Suan Sunandha Rajabhat Univ, Coll Allied Hlth Sci, Dept Publ Hlth & Hlth Promot, Bangkok, Thailand
[4] Mahidol Univ, Fac Publ Hlth, Dept Parasitol & Entomol, Nakhon Pathom, Thailand
[5] King Mongkuts Inst Technol Ladkrabang, Sch Engn, Dept Elect Engn, Bangkok, Thailand
[6] King Mongkuts Inst Technol Ladkrabang, Coll Adv Mfg Innovat, Bangkok, Thailand
Source
SCIENTIFIC REPORTS | 2024, Vol. 14, Issue 1
Keywords
Mosquito vector species; Artificial intelligence; Self-distillation with unlabeled data; Mobile phone application;
DOI
10.1038/s41598-024-83358-8
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Vector-borne diseases pose a major worldwide health concern, impacting more than 1 billion people globally. Among the various blood-feeding arthropods, mosquitoes stand out as the primary carriers of diseases of both medical and veterinary importance. Hence, understanding the distinct roles fulfilled by different mosquito species is crucial for efficiently addressing and enhancing control measures against mosquito-transmitted diseases. The conventional method for identifying mosquito species is laborious and requires significant effort to learn. Classification is carried out by skilled laboratory personnel, rendering the process inherently time-intensive and restricting the task to entomology specialists. Therefore, integrating artificial intelligence with standard taxonomy, such as molecular techniques, is essential for accurate mosquito species identification. Advances in artificial intelligence tools have opened the way to automated systems for sample collection and identification. This study introduces a self-supervised Vision Transformer supporting an automated model for classifying mosquitoes found across various regions of Thailand. The objective is to use self-distillation with unlabeled data (DINOv2) to develop models on a mobile phone-captured dataset containing 16 species of female mosquitoes, including those known to transmit malaria and dengue. The DINOv2 model surpassed the ViT baseline model in precision and recall for all mosquito species. In a species-level comparison, the DINOv2 model reduced false negatives and false positives and improved precision and recall relative to the baseline model across all mosquito species. Notably, at least 10 classes exhibited outstanding performance, achieving precision and recall rates exceeding 90%.
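The species-level comparison above rests on per-class precision and recall derived from false-positive and false-negative counts. As an illustrative sketch only (not the authors' code), these quantities can be computed from parallel lists of true and predicted labels; the species names below are chosen purely as examples:

```python
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Per-class precision and recall from parallel label lists."""
    classes = sorted(set(y_true) | set(y_pred))
    tp = Counter()  # correct predictions per class
    fp = Counter()  # false positives: predicted class was wrong
    fn = Counter()  # false negatives: true class was missed
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # the predicted class gains a false positive
            fn[t] += 1  # the true class gains a false negative
    return {
        c: {
            "precision": tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0,
            "recall": tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0,
        }
        for c in classes
    }

# Toy example with two hypothetical species labels
truth = ["An. minimus", "An. minimus", "Ae. aegypti", "Ae. aegypti"]
preds = ["An. minimus", "Ae. aegypti", "Ae. aegypti", "Ae. aegypti"]
print(per_class_metrics(truth, preds))
```

In this toy run, one An. minimus specimen misclassified as Ae. aegypti simultaneously lowers An. minimus recall and Ae. aegypti precision, which is why the paper reports both metrics per species.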
Remarkably, applying cropping techniques to the dataset instead of using the original photographs significantly improved performance across all DINOv2 models studied, increasing recall to 87.86%, precision to 91.71%, F1 score to 88.71%, and accuracy to 98.45%. Malaria mosquito species can be easily distinguished from other genera such as Aedes, Mansonia, Armigeres, and Culex. While classifying malaria vector species presented challenges for the DINOv2 model, using the cropped images improved precision to up to 96% for identifying Anopheles minimus, one of the top three malaria vectors in Thailand. A proficiently trained DINOv2 model, coupled with effective data management, can contribute to the development of a mobile phone application. Furthermore, this method shows promise in supporting field professionals who are not entomology experts in effectively addressing the pathogens responsible for diseases transmitted by female mosquitoes.
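The reported gain from cropping suggests a preprocessing step that centres the specimen and discards distracting background before the images reach the model. A minimal sketch of such a centre crop, assuming images are plain 2-D pixel grids (the paper's exact cropping procedure is not specified here):

```python
def center_crop(image, crop_h, crop_w):
    """Crop a 2-D pixel grid (list of rows) around its centre,
    keeping only the crop_h x crop_w region where the specimen sits."""
    h, w = len(image), len(image[0])
    if crop_h > h or crop_w > w:
        raise ValueError("crop size exceeds image size")
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]

# 4x6 toy "image"; keep the central 2x2 patch
img = [[r * 10 + c for c in range(6)] for r in range(4)]
patch = center_crop(img, 2, 2)
print(patch)  # the four central pixel values
```

In practice such a crop would be applied per photograph (e.g. around a detected mosquito bounding box rather than the geometric centre) before resizing to the transformer's input resolution.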
Pages: 16
Related Papers
50 records
  • [21] Self-Supervised Acoustic Word Embedding Learning via Correspondence Transformer Encoder
    Lin, Jingru
    Yue, Xianghu
    Ao, Junyi
    Li, Haizhou
    INTERSPEECH 2023, 2023, : 2988 - 2992
  • [22] Self-Supervised Pretraining via Multimodality Images With Transformer for Change Detection
    Zhang, Yuxiang
    Zhao, Yang
    Dong, Yanni
    Du, Bo
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [23] Self-Supervised Classification Network
    Amrani, Elad
    Karlinsky, Leonid
    Bronstein, Alex
    COMPUTER VISION, ECCV 2022, PT XXXI, 2022, 13691 : 116 - 132
  • [24] Pseudo-label enhancement for weakly supervised object detection using self-supervised vision transformer
    Yang, Kequan
    Wu, Yuanchen
    Li, Jide
    Yin, Chao
    Li, Xiaoqiang
    KNOWLEDGE-BASED SYSTEMS, 2025, 311
  • [25] MAT-VIT: A Vision Transformer with MAE-Based Self-Supervised Auxiliary Task for Medical Image Classification
    Han, Yufei
    Chen, Haoyuan
    Yao, Linwei
    Li, Kuan
    Yin, Jianping
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 2040 - 2046
  • [26] Self-Supervised Vision for Climate Downscaling
    Singh, Karandeep
    Jeong, Chaeyoon
    Shidqi, Naufal
    Park, Sungwon
    Nellikkatti, Arjun
    Zeller, Elke
    Cha, Meeyoung
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 7456 - 7464
  • [27] SWIN transformer based contrastive self-supervised learning for animal detection and classification
    Agilandeeswari, L.
    Meena, S. Divya
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (07) : 10445 - 10470
  • [28] TransPath: Transformer-Based Self-supervised Learning for Histopathological Image Classification
    Wang, Xiyue
    Yang, Sen
    Zhang, Jun
    Wang, Minghui
    Zhang, Jing
    Huang, Junzhou
    Yang, Wei
    Han, Xiao
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT VIII, 2021, 12908 : 186 - 195
  • [30] A self-supervised vision transformer to predict survival from histopathology in renal cell carcinoma
    Wessels, Frederik
    Schmitt, Max
    Krieghoff-Henning, Eva
    Nientiedt, Malin
    Waldbillig, Frank
    Neuberger, Manuel
    Kriegmair, Maximilian C.
    Kowalewski, Karl-Friedrich
    Worst, Thomas S.
    Steeg, Matthias
    Popovic, Zoran V.
    Gaiser, Timo
    von Kalle, Christof
    Utikal, Jochen S.
    Frohling, Stefan
    Michel, Maurice S.
    Nuhn, Philipp
    Brinker, Titus J.
    WORLD JOURNAL OF UROLOGY, 2023, 41 (08) : 2233 - 2241