Multimodal fusion recognition for digital twin

Cited by: 4
Authors
Zhou, Tianzhe [1 ]
Zhang, Xuguang [1 ]
Kang, Bing [1 ]
Chen, Mingkai [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Key Lab Broadband Wireless Commun & Sensor Network, Minist Educ, Nanjing 210003, Peoples R China
Keywords
Digital twin; Multimodal fusion; Object recognition; Deep learning; Transfer learning; CLASSIFICATION; NETWORKS; FEATURES;
DOI
10.1016/j.dcan.2022.10.009
CLC number (Chinese Library Classification)
TN [Electronic technology; Communication technology]
Subject classification code
0809
Abstract
The digital twin extends reality by feeding information from real physical space back into a virtual digital space, and great expectations are placed on this emerging technology. To upgrade the digital twin industrial chain, it is urgent to introduce additional modalities, such as vision, haptics, hearing, and smell, into the virtual digital space, so that physical entities and virtual objects are linked more closely. Perceptual understanding and object recognition have therefore become pressing topics in digital twin research. Existing surface material classification schemes typically rely on machine learning or deep learning over a single modality, ignoring the complementarity among modalities. To overcome this limitation, we propose a multimodal fusion network that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network exploits the potential correlations between modalities to deeply mine modal semantics and complete the data mapping; on the other hand, it is extensible and can serve as a universal architecture accommodating additional modalities. Experiments show that the proposed multimodal fusion network achieves 99.42% classification accuracy while reducing complexity.
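To make the two-branch fusion idea described in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' actual architecture: it assumes per-modality encoders followed by feature concatenation and a shared classification head, and every name (e.g., VisualHapticFusionNet), layer size, input dimension, and class count below is an illustrative assumption rather than a value from the paper.

```python
# Hypothetical sketch of a two-branch visual-haptic fusion classifier.
# Layer sizes, input dimensions, and the number of surface material
# classes are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn


class VisualHapticFusionNet(nn.Module):
    def __init__(self, visual_dim=512, haptic_dim=128, fused_dim=256, num_classes=10):
        super().__init__()
        # Modality-specific encoders project each input into a common feature space.
        self.visual_encoder = nn.Sequential(nn.Linear(visual_dim, fused_dim), nn.ReLU())
        self.haptic_encoder = nn.Sequential(nn.Linear(haptic_dim, fused_dim), nn.ReLU())
        # Fusion by concatenation, followed by a shared classification head.
        self.classifier = nn.Sequential(
            nn.Linear(2 * fused_dim, fused_dim),
            nn.ReLU(),
            nn.Linear(fused_dim, num_classes),
        )

    def forward(self, visual_feat, haptic_feat):
        v = self.visual_encoder(visual_feat)
        h = self.haptic_encoder(haptic_feat)
        fused = torch.cat([v, h], dim=-1)  # simple feature-level fusion
        return self.classifier(fused)


# Example forward pass with random tensors standing in for extracted
# visual and haptic feature descriptors.
model = VisualHapticFusionNet()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 10])
```

Adding a third modality under this sketch would only require another encoder branch and a wider first classifier layer, which is the sense in which such a concatenation-based design is extensible.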
Pages: 337-346
Number of pages: 10
Related Papers
50 records in total
  • [1] Multimodal fusion recognition for digital twin
    Tianzhe Zhou
    Xuguang Zhang
    Bing Kang
    Mingkai Chen
    Digital Communications and Networks, 2024, 10 (02) : 337 - 346
  • [2] Multimodal fusion for pattern recognition
    Khan, Zubair
    Kumar, Shishir
    Garcia Reyes, Edel B.
    Mahanti, Prabhat
    PATTERN RECOGNITION LETTERS, 2018, 115 : 1 - 3
  • [3] Multimodal data fusion for object recognition
    Knyaz, Vladimir
    MULTIMODAL SENSING: TECHNOLOGIES AND APPLICATIONS, 2019, 11059
  • [4] Fusion Mappings for Multimodal Affect Recognition
    Kaechele, Markus
    Schels, Martin
    Thiam, Patrick
    Schwenker, Friedhelm
    2015 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI), 2015, : 307 - 313
  • [5] An effective multimodal representation and fusion method for multimodal intent recognition
    Huang, Xuejian
    Ma, Tinghuai
    Jia, Li
    Zhang, Yuanjian
    Rong, Huan
    Alnabhan, Najla
    NEUROCOMPUTING, 2023, 548
  • [6] Realtime Object Recognition Method Inspired by Multimodal Information Processing in the Brain for Distributed Digital Twin Systems
    Seki, Ryoga
    Kominami, Daichi
    Shimonishi, Hideyuki
    Murata, Masayuki
    Fujiwaka, Masaya
    2022 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, IWCMC, 2022, : 913 - 918
  • [7] Multi-sensing node convolution fusion identity recognition algorithm for radio digital twin
    Wei G.
    Ding G.
    Jiao Y.
    Xu Y.
    Guo D.
    Tang P.
    Tongxin Xuebao/Journal on Communications, 2023, 44 (11): : 13 - 24
  • [8] A novel digital twin approach based on deep multimodal information fusion for aero-engine fault diagnosis
    Huang, Yufeng
    Tao, Jun
    Sun, Gang
    Wu, Tengyun
    Yu, Liling
    Zhao, Xinbin
    ENERGY, 2023, 270
  • [9] Multimodal Biometric Person Recognition by Feature Fusion
    Huang, Lin
    Yu, Chenxi
    Cao, Xinzhe
    2018 5TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING (ICISCE 2018), 2018, : 1158 - 1162
  • [10] Multimodal fusion for alzheimer's disease recognition
    Ying, Yangwei
    Yang, Tao
    Zhou, Hong
    APPLIED INTELLIGENCE, 2023, 53 (12) : 16029 - 16040